If your prediction is very accurate, you can practically "see" the future. Maybe not the entire picture of the future, maybe only a fraction of it, but a future nevertheless.
Say you bought a plot of land and you plan to build a house.
Since you have plenty of experience, you can accurately predict how long it will take, what kinds of problems might happen, how to prevent them before they take place, what needs to be checked, what to do if a check comes back a, b, or c, etc etc.
Fast forward months later, and in the end the house is built exactly the way you predicted it.
So you saw it before it was built. Predicting is seeing the future. Well, technically "seeing one possible version of the future" is more accurate, but a future nevertheless.
Only if it is accurate, tho.
Now here is what bothers me: as far as I know, at the foundational level an LLM (large language model, the kind of model behind most AI products these days, like ChatGPT) is a prediction machine. It predicts the next word, or token, in a sequence.
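If you want to see that "next token prediction" literally, here is a minimal sketch, assuming you have the Hugging Face transformers library and PyTorch installed. GPT-2 is just a small, openly available stand-in here, not whatever ChatGPT actually runs, and the prompt is made up for illustration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# GPT-2 is only a small stand-in model; any causal LM would work the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I bought a plot of land and I plan to build a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The last position holds the model's guess about what comes next.
next_token_probs = logits[0, -1].softmax(dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.2%}")
```

Every run of that is the model saying "here is what I think comes next," and repeating it token after token is the whole trick underneath the chat interface.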
By this logic, an LLM is basically making predictions and seeing a possible version of a future.
It is a future-seeing machine.