Prediction is a long-standing dream for many computer scientists and computing enthusiasts. Just imagine being able to guess which numbers will be picked next week in the lottery!
On a more serious note, there are different types of prediction. For example, you can predict which team is going to win the Super Bowl based on a wealth of statistics, and so forecast the winner with more or less robustness.
You can also predict the next image of a sequence or a film based on the previous images. You can predict whether a stock price will collapse the following day. You can predict whether two proteins are likely to interact, or the 3D structure of a protein from its sequence. In short, there is a multitude of applications across different fields.
But if you take a closer look, all these tasks share something (and the same is true for all prediction tasks): you need training data. Whatever the task, you need to train your algorithm on some data before it can make predictions.
Among the many ways to train an algorithm, two stand out as the most widely used: supervised learning and its counterpart, unsupervised learning. In brief, the first takes labeled data, tries to predict the answer, and then corrects itself using the known answer. In the second case, you don't have labeled data, so unlike supervised learning you cannot objectively evaluate the accuracy.
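To make the contrast concrete, here is a minimal sketch with made-up toy data: in the supervised case the labels let us score the model; in the unsupervised case we can only group points, with no ground truth to score against.

```python
# Supervised: labeled 1D points; "train" a crude threshold classifier,
# then evaluate it against the known labels.
labeled = [(0.1, "low"), (0.3, "low"), (0.7, "high"), (0.9, "high")]
threshold = sum(x for x, _ in labeled) / len(labeled)  # crude training step

def predict(x):
    return "high" if x >= threshold else "low"

accuracy = sum(predict(x) == y for x, y in labeled) / len(labeled)
print(accuracy)  # objective evaluation is possible because labels are known

# Unsupervised: the same points without labels; we can still group them...
unlabeled = [0.1, 0.3, 0.7, 0.9]
groups = {x: ("A" if x >= threshold else "B") for x in unlabeled}
print(groups)  # ...but there is no ground truth to compute an accuracy against
```

The data, threshold rule, and group names here are purely illustrative, but they show the key asymmetry: an accuracy number only exists in the supervised setting.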
This is not the case for all algorithms, but a large proportion of them work on discretized data, and that is true for both of our current choices.
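For readers unfamiliar with discretization, here is a hedged sketch of turning a continuous signal into a small alphabet of symbols, the kind of input a symbol-based sequence model expects. The bin edges and symbol names are illustrative assumptions, not something from the original text.

```python
def discretize(value, edges=(0.33, 0.66), symbols=("low", "mid", "high")):
    """Map a value in [0, 1] to one of a few discrete symbols.

    The bin edges and labels are arbitrary choices for illustration.
    """
    for edge, symbol in zip(edges, symbols):
        if value < edge:
            return symbol
    return symbols[-1]

# A made-up continuous signal becomes a discrete symbol sequence.
signal = [0.05, 0.40, 0.72, 0.95, 0.10]
print([discretize(v) for v in signal])  # ['low', 'mid', 'high', 'high', 'low']
```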
Our literature search led us to two algorithms. The first is the Hidden Markov Model (HMM), the most widely used model for speech recognition; if you want to learn more about HMMs, click here. The second is a kind of Recurrent Neural Network (RNN) called LSTM. If you want to learn more about it, click here.
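This is not a full HMM (there are no hidden states), but the following sketch shows the Markov idea underneath it: predict the next symbol from transition counts learned on a training sequence. The toy sequence is made up for illustration.

```python
from collections import Counter, defaultdict

def train_transitions(sequence):
    """Count symbol -> next-symbol transitions in a training sequence."""
    counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, symbol):
    """Return the most frequent successor observed after `symbol`."""
    return counts[symbol].most_common(1)[0][0]

training = list("ABABABCABAB")
counts = train_transitions(training)
print(predict_next(counts, "A"))  # "B" follows "A" most often in this sequence
```

An HMM adds a layer of unobserved states on top of this transition structure, and an LSTM replaces the counts with learned weights, but both ultimately predict the next element of a sequence from what came before.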