Employing a neural network to solve the repetition spacing problem

Tomasz Biedka
Bartosz Dreger
Jakub Kachlicki
Krzysztof Krawiec
Mikolaj Plazewski
Piotr Wierzejewski
Piotr Wozniak
Dec 12, 1997
Last update: Jul 23, 1998
Neural Network SuperMemo

It has long been postulated that neural networks might provide the most sound basis for solving the repetition spacing problem in SuperMemo. Owing to their plasticity, neural networks might provide the most valid way to detect the user profile and to classify individual knowledge elements into difficulty classes. The algebraic solution developed by Dr. Wozniak is partly heuristic and rests on a number of assumptions that do not always have a solid theoretical basis. This does not change the fact that Algorithm SM-8, as used in SuperMemo 8, appears to have closely approached the maximum speed of learning that repetition spacing can produce. Read the text below to learn more about a research project run at the Institute of Computer Science of the University of Technology in Poznan (headed by Prof. Roman Slowinski) in cooperation with SuperMemo Research (headed by Dr P.A. Wozniak), whose purpose is to verify the neural network approach and to assess the technical feasibility of applying a neural network in the currently available version of SuperMemo. If successful, the research results will be applied in developing future versions of SuperMemo: initially as an alternative plug-in algorithm, and later ... perhaps as the best repetition spacing algorithm available. We hereby invite comments on the presented approach from all interested parties.

The repetition spacing problem consists in computing optimum inter-repetition intervals in the process of human learning. The intervals are computed for individual pieces of information (later called items) and for a given individual. The entire input data are the grades obtained by the student in repetitions of items in the learning process. This problem has until now been most effectively solved by means of a successive series of algorithms known commercially as SuperMemo, developed by Dr. Wozniak at SuperMemo World, Poland. Wozniak’s model of memory, used in developing the most recent version of the algorithm (Algorithm SM-8), cannot be considered the ultimate algebraic description of human long-term memory. Most notably, the relationship between the complexity of the synaptic pattern and item difficulty is not well understood. More light might be shed on this relationship once a neural network is employed to provide an adequate mapping between the memory status, grading, and item difficulty.

Using current state-of-the-art solutions, the technical feasibility of a neural network application in a real-time learning process seems to hinge on applying our understanding of the learning process to adequately define the problems posed to the network. It would be unrealistic to expect the network to generate the solution upon receiving input in the form of the history of grades given in the course of repetitions of thousands of items. The computational and space complexity of such an approach would run well beyond the network’s ability to learn and respond in real time.

Using Wozniak’s model of the two components of long-term memory, we postulate that the following neural network solution might result in fast convergence and high repetition spacing accuracy.

The two memory variables needed to describe the state of a given engram are retrievability (R) and stability (S) of memory (Wozniak, Gorzelanczyk, Murakowski, 1995). The following equation relates R and S:

(1) R = e^(-k·t/S)

where:

R - retrievability of memory
S - stability of memory
t - time
k - constant

From Eqn (1) we can conclude how retrievability changes in time at a given stability, and we can determine the optimum inter-repetition interval for a given stability and a given forgetting index.
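To make this concrete, the following minimal sketch (Python is our choice here, not the project’s) solves Eqn (1) for the optimum interval: the forgetting index sets the target retrievability R = 1 - forgetting index, and solving R = e^(-k·t/S) for t yields t = -S·ln(R)/k.

```python
import math

def optimum_interval(stability: float, forgetting_index: float,
                     k: float = 1.0) -> float:
    """Solve Eqn (1), R = exp(-k*t/S), for the time t at which
    retrievability falls to R = 1 - forgetting_index.
    k is a hypothetical scaling constant; t inherits the units of S."""
    target_retrievability = 1.0 - forgetting_index
    return -stability * math.log(target_retrievability) / k

# Example: with stability S = 100 days and a 10% forgetting index,
# the optimum interval is -100 * ln(0.9), i.e. about 10.5 days.
print(optimum_interval(stability=100.0, forgetting_index=0.1))
```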

The exact algebraic shape of the function that describes the change of stability upon a repetition is not known. However, experimental data indicate that stability usually increases 1.3 to 3 times upon a properly timed repetition, depending on item difficulty (the greater the difficulty, the lower the increase). By providing an approximation of optimum repetition spacing taken from experimental data, as produced by the optimization matrices of Algorithm SM-8, the neural network can be pre-trained to compute the stability function:

(2) S_{i+1} = f_S(R, S_i, D, G)

where:

S_i - stability of memory before the i-th repetition (S_{i+1} - stability after it)
R - retrievability at the moment of the repetition
D - item difficulty
G - grade given in the repetition
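As an illustration of what such pretraining data might look like, the sketch below generates synthetic (R, S_i, D, G) → S_{i+1} pairs under a hypothetical linear heuristic for the stability increase, falling from about 3 for the easiest items to about 1.3 for the hardest, i.e. the experimental range quoted above; the real pretraining set would instead be derived from the optimization matrices of Algorithm SM-8.

```python
import numpy as np

rng = np.random.default_rng(0)

def stability_increase(difficulty):
    # Hypothetical heuristic: the increase factor falls linearly from
    # ~3.0 for the easiest items (D = 0) to ~1.3 for the hardest (D = 1),
    # matching the experimental range quoted in the text.
    return 3.0 - 1.7 * difficulty

def make_pretraining_set(n):
    # Synthetic (R, S_i, D, G) -> S_{i+1} pairs standing in for data
    # derived from the optimization matrices of Algorithm SM-8.
    R = rng.uniform(0.8, 1.0, n)    # retrievability at repetition time
    S = rng.uniform(1.0, 365.0, n)  # current stability, in days
    D = rng.uniform(0.0, 1.0, n)    # item difficulty, 0 = easiest
    G = rng.integers(0, 6, n)       # grade on SuperMemo's 0..5 scale
    X = np.column_stack([R, S, D, G])
    # For brevity the target ignores R and G; a realistic set would
    # modulate the stability increase by both.
    y = S * stability_increase(D)
    return X, y

X, y = make_pretraining_set(10_000)
```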

The stability function is the first function to be determined by the neural network. The second one is the item difficulty function with analogous input parameters:

(3) D_{i+1} = f_D(R, S, D_i, G)

where:

D_i - item difficulty before the i-th repetition (D_{i+1} - difficulty after it)
R - retrievability at the moment of the repetition
S - stability of memory
G - grade given in the repetition

Consequently, a neural network with four inputs (D,R,S and G) and two outputs (S and D) can be used to encapsulate the entire knowledge needed to compute inter-repetition intervals (see Implementation of the repetition spacing neural network).
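For concreteness, here is a minimal sketch of such a network (in PyTorch; the framework, layer sizes, and activations are assumptions for illustration, not the project’s design):

```python
import torch
import torch.nn as nn

class SpacingNet(nn.Module):
    """Four inputs (R, S, D, G), two outputs (S_{i+1}, D_{i+1})."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),  # (next stability, next difficulty)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

net = SpacingNet()
x = torch.tensor([[0.9, 100.0, 0.3, 4.0]])  # one item: (R, S, D, G)
next_S, next_D = net(x)[0]
```

In practice the inputs would be normalized (stability in particular spans several orders of magnitude), but that detail is omitted here.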

The following steps will be taken in order to verify the feasibility of the aforementioned approach:

  1. Pretraining of the neural network will be done on the basis of approximated S and D functions derived from the functions used in Algorithm SM-8 and the experimental data collected with them.
  2. The pretrained network will be implemented as a SuperMemo Plug-In DLL that will replace the standard sm8opt.dll used by SuperMemo 8 for Windows. The teaching of the network will continue in a real learning process during alpha testing of the neural network DLL. A procedure designed specifically for the purpose of the experiment will be used to provide cumulative results and a resultant neural network: the networks used in alpha testing will train the network that will take part in beta testing. The alpha-testing networks will be fed with a matrix of input parameters, and their output will be used as training data for the resultant network (see the sketch after this list).
  3. In the last step, beta testing of the neural network will be open to all volunteers over the Internet, directly from the SuperMemo Website. The volunteers will only be asked to submit their resultant networks for the final stage of the experiment, in which the ultimate network will be developed. Again, the beta-testing networks will all be used to train the resultant network. Future users of neural network SuperMemo (if the project proves successful) will obtain a network with a fair understanding of human memory, able to further refine its reactions to the interference of the learning process with the day-to-day activities of a particular student and the particular study material.
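The training of the resultant network from the tester networks in steps 2 and 3 amounts to what is today called distillation: the tester networks are queried over a matrix of input parameters, and a fresh network is fitted to their answers. A minimal sketch follows, reusing the hypothetical SpacingNet above; averaging the tester outputs is our assumption, as the text only says those outputs serve as training data.

```python
import torch
import torch.nn as nn

def distill(tester_nets, resultant: nn.Module, grid: torch.Tensor,
            epochs: int = 200, lr: float = 1e-3) -> None:
    # Query every tester network over the matrix of input parameters
    # and fit the resultant network to the averaged answers.
    with torch.no_grad():
        targets = torch.stack([net(grid) for net in tester_nets]).mean(dim=0)
    opt = torch.optim.Adam(resultant.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(resultant(grid), targets)
        loss.backward()
        opt.step()
```

The grid of (R, S, D, G) input vectors could be built, for instance, with torch.cartesian_prod over discretized ranges of the four parameters.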
The major problem in all repetition spacing algorithms is the delay between producing the output of the function of optimum intervals and observing the result of applying a given inter-repetition interval in practice. On each repetition, the state of the network from the previous repetition must be remembered in order to generate the new state of the network. In practice, this equates to storing an enormous number of network states in between repetitions.

Luckily, Wozniak’s model implies that the functions f_S and f_D are time independent (interestingly, they are also likely to be user independent!); therefore, the following approach may be taken to simplify the procedure:

| Time moment | T_1 | T_2 | T_3 |
|---|---|---|---|
| Decision | I_1, N_1, O_1 = N_1(I_1) | I_2, N_2, O_2 = N_2(I_2) | I_3, N_3, O_3 = N_3(I_3) |
| Result of previous decision | | O*_1, E_1 = O*_1 - O_1 | O*_2, E_2 = O*_2 - O_2 |
| Evaluation for teaching | | O'_1 = N_2(I_1), E'_1 = O*_1 - O'_1 | O'_2 = N_3(I_2), E'_2 = O*_2 - O'_2 |

where:

T_i - moment of the i-th repetition
I_i - network input describing a given item at the i-th repetition
N_i - state of the network at time T_i
O_i - output (decision) produced by N_i for input I_i
O*_i - desired output for I_i, known only once the result of the decision has been observed
E_i - error of the original decision
O'_i, E'_i - output and error re-evaluated with the current network for teaching purposes

The above approach requires only I_{i-1} to be stored for each item between the repetitions taking place at T_{i-1} and T_i, with substantial savings in the amount of data stored during the learning process (E'_i is as valuable for training as E_i). In its space complexity the proposed solution is thus comparable with Algorithm SM-8! Only one (current) state of the neural network has to be remembered throughout the process.
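Under these assumptions, one repetition of one item reduces to the bookkeeping sketched below with a hypothetical repetition_step helper: only I_{i-1} is kept per item, the current network is re-run on it to obtain O', and the error E' = O* - O' drives a single training step of the one current network.

```python
import torch
import torch.nn as nn

def repetition_step(net: nn.Module, opt: torch.optim.Optimizer,
                    stored_input: torch.Tensor,
                    desired_output: torch.Tensor,
                    new_input: torch.Tensor) -> torch.Tensor:
    """One repetition of one item under the scheme in the table above.

    stored_input   -- I_{i-1}, the only per-item data kept between repetitions
    desired_output -- O*_{i-1}, derived from the grade actually obtained
    new_input      -- I_i, returned so it can be stored for the next repetition

    Because f_S and f_D are time independent, O'_{i-1} = N_i(I_{i-1})
    computed with the *current* network stands in for the long-discarded
    O_{i-1} = N_{i-1}(I_{i-1}); the error E' = O* - O' is as valuable
    for training as E would have been.
    """
    opt.zero_grad()
    o_prime = net(stored_input)                 # O'_{i-1} = N_i(I_{i-1})
    loss = nn.functional.mse_loss(o_prime, desired_output)
    loss.backward()                             # train on E' = O* - O'
    opt.step()
    return new_input.detach()                   # becomes I_{i-1} next time
```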

These are the present implementation assumptions for the discussed project: