Discussion about this post

Oleksiy Nagornyy:

BTW: A fascinating study—“Connectivity Changes Following Episodic Future Thinking in Alcohol Use Disorder”—offers a useful psychological angle on how to resist short-termism. The researchers didn’t train people to visualize goals, but simply invited them to dream about what they might be doing a year from now. This small mental shift—toward imagining future routines—was enough to reduce impulsive, self-sabotaging behaviors. Brain scans even showed stronger connectivity in areas linked to self-regulation and goal-relevant processing.

Importantly, this isn’t the same as the “visualize success” narrative that gets sold in pop-psych and hustle culture. It’s about visualizing the path, not just the outcome.

This echoes earlier findings too, like the 1999 UCLA study "From Thought to Action" (Pham & Taylor), where students who imagined the process of studying did far better than those who pictured getting an A. The "success-visualization" group actually performed the worst: feeling rewarded by fantasy, they lost drive in reality.

In that sense, even daydreaming becomes a kind of cognitive training, one that builds the same muscles we use to navigate complexity, delay gratification, and choose wisely. And from daydreaming it's just one small step to meditation, which is a well-documented rabbit hole of benefits.

And how many CEOs do we know who daydream or meditate?

Oleksiy Nagornyy:

I wouldn't say it's entirely the CEO's fault. Even when a company does hire inexperienced employees, managers often immediately equip them with AI to boost productivity and shorten onboarding. New starters then come to rely on AI auto-suggestions at every step, which erodes their ability to solve problems independently and undermines their fundamental understanding. They end up as LLM operators, mindlessly accepting machine advice, which is as bad as not hiring inexperienced people at all.

To expand on your 'who cares?': I'd add that the gradual displacement of human intelligence is inevitable. It fits naturally into the theory of gene-culture coevolution, which doesn't care about the intelligence of individuals; what matters is the growth of collective intelligence in planetary (and eventually universal) life. In the socio-cognitive networks of a global intelligence, evolution doesn't care whether people or algorithms are more numerous.

