Show simple item record

dc.contributor.author
Palombarini, Jorge Andrés  
dc.contributor.author
Martínez, Ernesto Carlos  
dc.date.available
2021-03-12T13:45:51Z  
dc.date.issued
2019-07  
dc.identifier.citation
Palombarini, Jorge Andrés; Martínez, Ernesto Carlos; Closed-loop rescheduling using deep reinforcement learning; Elsevier B.V.; IFAC-PapersOnLine; 52; 1; 7-2019; 231-236  
dc.identifier.uri
http://hdl.handle.net/11336/128207  
dc.description.abstract
Modern socio-technical manufacturing systems are characterized by high levels of variability, which give rise to poor predictability of environmental conditions at the shop floor. Therefore, a closed-loop rescheduling strategy for handling unforeseen events and unplanned disturbances has become a key element of any real-time disruption management strategy in order to guarantee highly efficient production under uncertain and dynamic conditions. In this work, a real-time rescheduling task is modeled as a closed-loop control problem in which an artificial intelligent agent implements control knowledge generated offline using a schedule simulator to learn schedule repair policies directly from high-dimensional sensory inputs. The rescheduling control knowledge is stored in a deep Q-network, which is used in closed loop to select repair actions that reach a small set of repaired goal states. The network is trained using the deep Q-learning algorithm with experience replay over a variety of simulated transitions between schedule states, taking color-rich Gantt chart images and negligible prior knowledge as inputs. An industrial example is discussed to highlight that the proposed approach enables end-to-end learning of comprehensive rescheduling policies and encodes plant-specific knowledge that can be understood by human experts.  
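The abstract above describes training a Q-network with experience replay over simulated transitions between schedule states. As background context only, the sketch below illustrates the general Q-learning-with-experience-replay loop on a hypothetical toy "schedule repair" task (tabular Q-values rather than the paper's deep network over Gantt chart images; the environment, reward values, and all hyperparameters are illustrative assumptions, not taken from the paper):

```python
import random
from collections import deque

# Toy "schedule repair" MDP (hypothetical, for illustration only):
# states 0..5, where state 5 is the repaired goal state; action 0 moves
# one step left, action 1 one step right; each step costs -1 until the
# goal is reached.
N_STATES, GOAL = 6, 5
ACTIONS = (0, 1)

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = nxt == GOAL
    reward = 0.0 if done else -1.0
    return nxt, reward, done

def train(episodes=300, gamma=0.95, alpha=0.1, eps=0.2,
          buffer_size=500, batch_size=16, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    replay = deque(maxlen=buffer_size)  # experience replay buffer
    for _ in range(episodes):
        s = rng.randrange(N_STATES - 1)  # start from a disrupted state
        for _ in range(50):
            # epsilon-greedy selection of a repair action
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda a_: q[(s, a_)]))
            s2, r, done = step(s, a)
            replay.append((s, a, r, s2, done))
            # sample a minibatch of stored transitions and apply
            # the Q-learning update to each one
            batch = rng.sample(list(replay), min(batch_size, len(replay)))
            for (bs, ba, br, bs2, bdone) in batch:
                target = br if bdone else br + gamma * max(
                    q[(bs2, a_)] for a_ in ACTIONS)
                q[(bs, ba)] += alpha * (target - q[(bs, ba)])
            s = s2
            if done:
                break
    return q

q = train()
# Greedy policy derived from the learned Q-values: from any disrupted
# state, the agent should move right toward the repaired goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
```

Replaying past transitions decorrelates the updates from the current trajectory, which is the same motivation the abstract gives for training the deep Q-network with experience replay.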
dc.format
application/pdf  
dc.language.iso
eng  
dc.publisher
Elsevier B.V.  
dc.rights
info:eu-repo/semantics/restrictedAccess  
dc.rights.uri
https://creativecommons.org/licenses/by-nc-sa/2.5/ar/  
dc.subject
DEEP REINFORCEMENT LEARNING  
dc.subject
MANUFACTURING SYSTEMS  
dc.subject
REAL-TIME RESCHEDULING  
dc.subject
UNCERTAINTY  
dc.subject.classification
Systems and Communications Engineering  
dc.subject.classification
Electrical Engineering, Electronic Engineering and Information Engineering  
dc.subject.classification
ENGINEERING AND TECHNOLOGY  
dc.title
Closed-loop rescheduling using deep reinforcement learning  
dc.type
info:eu-repo/semantics/article  
dc.type
info:ar-repo/semantics/artículo  
dc.type
info:eu-repo/semantics/publishedVersion  
dc.date.updated
2021-03-05T18:52:58Z  
dc.identifier.eissn
2405-8963  
dc.journal.volume
52  
dc.journal.number
1  
dc.journal.pagination
231-236  
dc.journal.pais
Netherlands  
dc.journal.ciudad
Amsterdam  
dc.description.fil
Fil: Palombarini, Jorge Andrés. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Córdoba. Centro de Investigaciones y Transferencia de Villa María. Universidad Nacional de Villa María. Centro de Investigaciones y Transferencia de Villa María; Argentina  
dc.description.fil
Fil: Martínez, Ernesto Carlos. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Desarrollo y Diseño. Universidad Tecnológica Nacional. Facultad Regional Santa Fe. Instituto de Desarrollo y Diseño; Argentina  
dc.journal.title
IFAC-PapersOnLine  
dc.relation.alternativeid
info:eu-repo/semantics/altIdentifier/url/https://linkinghub.elsevier.com/retrieve/pii/S2405896319301521  
dc.relation.alternativeid
info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.1016/j.ifacol.2019.06.067