Real-time rescheduling of production systems using relational reinforcement learning


  • Jorge Palombarini GISIQ - UTN - Fac. Reg.
  • Ernesto Martinez INGAR (CONICET-UTN)


Keywords: Learning, Rescheduling, Relational Modeling, Agile Manufacturing.


Most scheduling methodologies developed to date have laid down good theoretical foundations, but there is still a need for real-time rescheduling methods that can work effectively in disruption management. In this work, a novel approach for the automatic generation of rescheduling knowledge using Relational Reinforcement Learning (RRL) is presented. Relational representations of schedule states and repair operators make it possible to compactly encode, and use in real time, rescheduling knowledge learned through intensive simulations of state transitions. An industrial example in which a current schedule must be repaired following the arrival of a new order is discussed using a prototype application, SmartGantt®, for interactive rescheduling in a reactive way. SmartGantt® demonstrates the advantages of resorting to RRL and abstract states for real-time rescheduling. A small number of training episodes is required to define a repair policy which can handle, on the fly, events such as order insertion, resource breakdown, raw material delay or shortage, and rush order arrivals, using a sequence of repair operators to achieve a selected goal.

DOI: 10.13084/2175-8018.v03n06a09
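The core idea can be illustrated with a minimal, hypothetical sketch: reinforcement learning over *abstracted* schedule states, where actions are repair operators and an episode ends when a repair goal is reached. Everything here is invented for illustration (the toy single-machine jobs, the tardiness bucketing, the adjacent-swap operator); the paper's actual system uses relational first-order abstractions and learned regression models rather than a lookup table.

```python
import random
from collections import defaultdict

# Hypothetical toy instance: jobs on one machine, each (duration, due_date).
JOBS = [(2, 9), (3, 3), (4, 12), (1, 4)]
OPS = list(range(len(JOBS) - 1))  # repair operator i: swap positions i, i+1


def tardiness(order):
    """Total tardiness of a job sequence on a single machine."""
    t, total = 0, 0
    for j in order:
        dur, due = JOBS[j]
        t += dur
        total += max(0, t - due)
    return total


def abstract_state(order):
    # Abstraction step: many concrete schedules map to one coarse state
    # (a bucketed tardiness level), which keeps the policy compact.
    return min(tardiness(order) // 3, 5)


def apply_op(order, i):
    new = list(order)
    new[i], new[i + 1] = new[i + 1], new[i]
    return tuple(new)


def train(episodes=300, alpha=0.3, gamma=0.9, eps=0.2, seed=0):
    """Q-learning over abstract states; each episode repairs a random schedule."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        order = tuple(rng.sample(range(len(JOBS)), len(JOBS)))
        for _ in range(10):  # bounded repair episode
            s = abstract_state(order)
            if s == 0:  # goal reached: tardiness below threshold
                break
            a = (rng.choice(OPS) if rng.random() < eps
                 else max(OPS, key=lambda o: Q[(s, o)]))
            nxt = apply_op(order, a)
            s2 = abstract_state(nxt)
            reward = (s - s2) - 0.1  # improvement minus a per-step cost
            best_next = max(Q[(s2, o)] for o in OPS)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            order = nxt
    return Q


def repair(order, Q, max_steps=10):
    """Greedily apply learned repair operators; return the best schedule seen."""
    order = tuple(order)
    best = order
    for _ in range(max_steps):
        s = abstract_state(order)
        if s == 0:
            break
        a = max(OPS, key=lambda o: Q[(s, o)])
        order = apply_op(order, a)
        if tardiness(order) < tardiness(best):
            best = order
    return best
```

The sketch mirrors the abstract's claim only in structure: because states are abstracted, a policy learned from a modest number of simulated episodes can be reused across many concrete disrupted schedules at repair time.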

