Analyzing the energy efficient path in Wireless Sensor Network using Machine Learning

Tirtharaj Sapkota, Bobby Sharma

Abstract


Since sensor nodes are energy constrained, designing energy-efficient routing protocols that prolong network lifetime is a key factor in the successful deployment of a Wireless Sensor Network (WSN). Network lifetime has been defined in several ways, for example as the time at which the network loses connectivity, or the time at which the first node dies. Whichever definition is adopted, the main focus of many researchers is to design algorithms that keep the network operating continuously for as long as possible. Improving energy efficiency and extending network lifetime are therefore the two key issues in WSN routing. Owing to their adaptive nature and learning capability, reinforcement learning (RL) algorithms, a subclass of machine learning techniques, are well suited to complex distributed problems such as WSN routing; in particular, RL can be used to choose the best forwarding node for transmitting data in multipath routing protocols. This paper surveys the application of RL techniques to routing problems in WSNs. It also proposes an algorithm that modifies the original Directed Diffusion (DD) protocol using Q-learning, a special class of RL. In addition, the significance of balancing the exploration and exploitation rates during path finding in Q-learning is demonstrated through an experiment implemented in Python. The results show that when the exploration-exploitation trade-off is properly balanced, the learning process consistently converges to an optimal reward, and the resulting source-to-destination path is efficient.
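To illustrate the exploration-exploitation balance the abstract refers to, the following is a minimal sketch of epsilon-greedy Q-learning for path finding in Python, the language of the paper's experiment. The six-node topology, reward values, and parameter settings (ALPHA, GAMMA, EPSILON) are illustrative assumptions, not the paper's actual experimental setup.

    import random

    # Hypothetical sensor-network topology: adjacency list of node ids.
    # Node 5 is assumed to be the sink (destination); values are illustrative.
    GRAPH = {
        0: [1, 3],
        1: [0, 2, 4],
        2: [1, 5],
        3: [0, 4],
        4: [1, 3, 5],
        5: [2, 4],
    }
    SINK = 5
    ALPHA, GAMMA = 0.8, 0.9   # learning rate and discount factor (assumed)
    EPSILON = 0.2             # exploration rate: the balance under study

    # Q-table: Q[node][neighbour] = expected cumulative reward of forwarding.
    Q = {n: {m: 0.0 for m in nbrs} for n, nbrs in GRAPH.items()}

    def reward(next_node):
        # +100 for delivering to the sink, small penalty per hop otherwise.
        return 100.0 if next_node == SINK else -1.0

    def choose_next(node):
        # Epsilon-greedy: explore a random neighbour with probability EPSILON,
        # otherwise exploit the neighbour with the highest Q-value.
        if random.random() < EPSILON:
            return random.choice(GRAPH[node])
        return max(Q[node], key=Q[node].get)

    def train(episodes=500, source=0):
        for _ in range(episodes):
            node = source
            while node != SINK:
                nxt = choose_next(node)
                best_future = max(Q[nxt].values())
                # Q-learning update:
                # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
                Q[node][nxt] += ALPHA * (reward(nxt)
                                         + GAMMA * best_future
                                         - Q[node][nxt])
                node = nxt

    def greedy_path(source=0):
        # Follow the learned Q-values greedily from source to sink.
        path, node = [source], source
        while node != SINK:
            node = max(Q[node], key=Q[node].get)
            path.append(node)
        return path

    train()
    print("Learned path:", greedy_path())  # e.g. [0, 1, 2, 5] for this topology

With EPSILON = 0 the agent can get stuck exploiting early, poorly informed estimates, while EPSILON = 1 reduces to a random walk; a moderate value balances the two, which mirrors the trade-off the paper's experiment investigates.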

