Our weird dreams may help us make sense of reality, AI-inspired theory suggests

There you are, sitting front row of Miss Ryan’s English class in your underwear, when in walks Chris Hemsworth holding a saxophone in one hand and a turtle in the other, asking you to play in his band.

“Why not?” you say, taking the turtle before snapping awake in a cold sweat, the darkness pressing in as you whisper to yourself, “…WTF?”

Decades, if not centuries, of psychological analysis have ventured to explain why our imaginations go on strange, unconstrained journeys while we sleep, with the general consensus being that it has to do with processing experiences from our waking hours.

That’s all well and good, but seriously, do they have to be so … well, bizarre?

Neuroscientist Erik Hoel from Tufts University has taken inspiration from the way we teach neural networks to recognize patterns, arguing the very experience of dreaming is its own purpose, and its weirdness might be a feature, not a bug.

“There’s obviously an incredible number of theories of why we dream,” says Hoel.

“But I wanted to bring to attention a theory of dreams that takes dreaming itself very seriously – that says the experience of dreams is why you’re dreaming.”

Just as we might teach a child how to read, training a program to identify patterns in a human-like manner requires repeatedly running through scenarios that have certain things – like arrangements of letters – in common.

Computing engineers have found this repetition can help a program become exceptionally good at recognizing patterns of elements within the context of its training sets, at the risk of it struggling to apply the same process when the situation gets real outside the classroom.

This problem is referred to as overfitting, and it basically amounts to an inability to generalize in situations that contain unpredictable elements. Situations like those in the real world.
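To see what that failure mode looks like, here is a minimal sketch in Python (a toy NumPy illustration of our own, not anything from Hoel's paper): a high-degree polynomial memorizes a handful of noisy training points almost perfectly, then performs far worse on fresh points from the same underlying curve.

```python
import numpy as np

# Ten noisy samples from a smooth curve stand in for a small training set.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, size=10)

# A degree-9 polynomial has enough freedom to pass through every training point.
coeffs = np.polyfit(x_train, y_train, deg=9)

# Fresh points from the same curve play the role of "the real world".
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"train error: {train_err:.4f}  test error: {test_err:.4f}")
# The training error is close to zero while the test error is typically much
# larger: the model memorized the examples instead of learning the curve.
```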

Fortunately, computer scientists have some fixes. One is to throw in more scenarios, just like giving a student more and more books to read. Sooner or later, the diversity in lessons will come to reflect the complexity of everyday life.

Another method introduces twists as a feature of the pattern being learned. By augmenting the data in some way (such as by reversing a symbol), a program is forced to deal with the fact that patterns aren't all going to look identical.
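In everyday deep learning this is usually a line or two of code; the sketch below uses torchvision's standard image transforms purely as an illustration (the specific flips and rotations are our choice, not something from the paper):

```python
from torchvision import transforms

# Each time an image is drawn for training, it is randomly mirrored and
# slightly rotated, so the network never sees exactly the same pixels twice.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # reverse the image half the time
    transforms.RandomRotation(degrees=15),    # add a small random tilt
    transforms.ToTensor(),
])

# augmented = augment(pil_image)  # yields a slightly different tensor per call
```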

These fixes help improve the chances a program will cope with a wider variety of situations, but it’s impossible to come up with a lesson for every single possible event life might throw its way.

Perhaps the cleverest fix is referred to as dropout. Forcing the AI to ignore – or drop out – random features of a lesson gives it the tools to cope better with scenarios that include a few potentially confusing elements.
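In a framework like PyTorch, dropout is a single extra layer in the network; the layer sizes and dropout rate below are arbitrary choices for illustration, not taken from Hoel's work:

```python
import torch
import torch.nn as nn

# A small classifier in which half of the hidden activations are randomly
# zeroed on every training pass, so no single feature can be relied upon.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),      # the "dropout" step: silence features at random
    nn.Linear(256, 10),
)

model.train()                # dropout is active during training
x = torch.randn(32, 784)     # a batch of 32 fake inputs
print(model(x).shape)        # torch.Size([32, 10])

model.eval()                 # dropout is switched off for real predictions
```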

Realizing there is a similarity between overfitting fixes and things like Chris Hemsworth offering you a turtle in your dream, Hoel has extended the fundamentals of dropout to our own brains to develop the “overfitted brain hypothesis”.

“If you look at the techniques that people use in regularization of deep learning, it’s often the case that those techniques bear some striking similarities to dreams,” says Hoel.

Keeping in mind it’s a hypothesis in want of rigorous testing, the fact that we happen to dream of tasks we already perform repeatedly during the day could be better explained if our brains engaged in their own kind of dropout to prevent overfitting.

Hoel also cites the fact that loss of sleep – and with it, those strange dream states – still allows us to process knowledge, while making it harder to generalize what we’ve learned.

Although the very nature of dreaming makes any hypothesis on its purpose hard to test, experiments challenging the overfitted brain hypothesis would focus on variations in generalization rather than memorization.

If found to have merit, the hypothesis could guide the way to improving solutions to overfitting in AI, tweaking the timing and nature of dropout, or augmenting variables in ways that help the learning process generalize more efficiently.

“Life is boring sometimes,” says Hoel.

“Dreams are there to keep you from becoming too fitted to the model of the world.”

So take that turtle, tell Miss Ryan that you’re over J.D. Salinger, and go on the road with Chris’s band. Your brain will thank you for it when you wake up.

This research was published in Patterns.
