RNM
Introduction
The Relational Neural Machines (RNM) framework is a neuro-symbolic methodology to inject logic knowledge into a learner as described in the original article:
- Relational neural machines, G. Marra, M. Diligenti, F. Giannini, M. Gori, and M. Maggini. In Proceedings of the European Conference on Artificial Intelligence (ECAI), 2020.
Description
This package provides a TensorFlow 2 library that allows prior knowledge to be injected into any TensorFlow-based learner.
The library defines:
- a declarative language to express the prior knowledge about a learning task;
- the RNM training and inference engine (a toy inference sketch follows this list).
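As a toy illustration of the second point, the sketch below shows one way the inference side of such an engine can be pictured: candidate label assignments are scored by summing the network outputs with a weighted logic potential, and the best-scoring assignment is selected. All names and the brute-force search are hypothetical simplifications for a tiny world, not the library's actual API.

```python
import itertools
import numpy as np

# Hypothetical sketch of RNM-style MAP inference (an illustration, not the
# engine's actual code): each candidate assignment y is scored by an
# unnormalized log-linear model summing the network's per-item scores with a
# weighted logic potential that counts how often a rule is satisfied.

def map_inference(nn_scores, rule, rule_weight=2.0):
    # nn_scores: [num_items, num_classes] scores from a trained network.
    num_items, num_classes = nn_scores.shape
    best, best_score = None, -np.inf
    for y in itertools.product(range(num_classes), repeat=num_items):
        score = sum(nn_scores[i, y[i]] for i in range(num_items))
        score += rule_weight * rule(y)  # logic potential
        if score > best_score:
            best, best_score = y, score
    return best

# Toy rule: items 0 and 1 should share the same class. The rule overrides the
# network's weak preference for class 0 on the first item.
nn_scores = np.array([[2.0, 1.9], [0.0, 1.0]])
y_map = map_inference(nn_scores, lambda y: float(y[0] == y[1]))  # -> (1, 1)
```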
The software requirements are listed in the requirements.txt file of this package.
Provided Examples
The package provides two examples of usage of the software library:
- `follows_rnm.py`: this script executes a small toy task that highlights the capability of RNMs to learn and employ soft rules that hold only on a sub-portion of the whole dataset. The MNIST dataset contains images of handwritten digits, and the task assumes that additional relational logic knowledge is available to reason over the digits. In particular, given a certain subset of images, a binary predicate link between image pairs is considered. Given two images x, y, whose corresponding digits are denoted by i, j, a link between x and y is established if the second digit follows the first one, i.e. i = j + 1. However, the link predicate is assumed to be noisy: even when i ≠ j + 1, there is a given probability that link(x, y) is established anyway (see the first sketch after this list). The knowledge about the link predicate can be represented by the following FOL formula: ∀x ∀y ∀i ∀j link(x, y) ∧ digit(x, i) ∧ digit(y, j) ⇒ i = j + 1. A neural network is trained and then used to process the input images, under the prior provided by the logic knowledge.
- `citeseer_rnm.py`: this script executes a neuro-symbolic classification task on the CiteSeer dataset, a collection of 3312 scientific papers, each represented by its bag-of-words and assigned to one of 6 mutually exclusive classes. The papers are connected by a citation network containing 4732 links. The domain knowledge used for this task states that connected papers p1, p2 tend to be about the same topic: ∀p1 ∀p2 ClassN(p1) ∧ Cite(p1, p2) → ClassN(p2) (see the second sketch after this list).
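The noisy link relation used by `follows_rnm.py` can be pictured with the following minimal sketch; the `build_links` helper and the `noise` parameter are hypothetical and only illustrate the data-generation idea, not the script's actual code.

```python
import numpy as np

# Hypothetical helper (not part of the provided script): builds the noisy
# link relation over a subset of digit images. link(x, y) holds whenever
# i = j + 1 for the corresponding digits i, j and, with probability `noise`,
# also for pairs that violate the rule.

def build_links(labels, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    links = []
    for a, i in enumerate(labels):
        for b, j in enumerate(labels):
            if a != b and (i == j + 1 or rng.random() < noise):
                links.append((a, b))
    return links

links = build_links([3, 2, 7, 8])  # always contains (0, 1) and (3, 2)
```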
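For `citeseer_rnm.py`, the citation rule can be grounded as a differentiable penalty over citation edges, for instance with a product t-norm; the function below is an illustrative assumption, not the script's implementation.

```python
import tensorflow as tf

# Hypothetical grounding (not the script's actual code) of
#   forall p1, p2: ClassN(p1) AND Cite(p1, p2) -> ClassN(p2)
# using a product t-norm: each citation edge is penalized whenever the
# citing paper is assigned to a class but the cited paper is not.

def citation_penalty(probs, edges):
    # probs: [num_papers, 6] class probabilities; edges: (citing, cited) pairs.
    src = tf.gather(probs, [a for a, _ in edges])
    dst = tf.gather(probs, [b for _, b in edges])
    return tf.reduce_mean(src * (1.0 - dst))

probs = tf.nn.softmax(tf.random.normal([4, 6]))  # toy predictions for 4 papers
logic_loss = citation_penalty(probs, [(0, 1), (2, 3)])
```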