5 Comments
Oleksiy Nagornyy:

BTW: A fascinating study—“Connectivity Changes Following Episodic Future Thinking in Alcohol Use Disorder”—offers a useful psychological angle on how to resist short-termism. The researchers didn’t train people to visualize goals, but simply invited them to dream about what they might be doing a year from now. This small mental shift—toward imagining future routines—was enough to reduce impulsive, self-sabotaging behaviors. Brain scans even showed stronger connectivity in areas linked to self-regulation and goal-relevant processing.

Importantly, this isn’t the same as the “visualize success” narrative that gets sold in pop-psych and hustle culture. It’s about visualizing the path, not just the outcome.

This echoes earlier findings too—like the 1999 UCLA study From Thought to Action (Pham & Taylor), where students who imagined the process of studying did far better than those who pictured getting an A. The “success-visualization” group actually performed the worst—feeling rewarded by fantasy, they lost drive in reality.

In that sense, even daydreaming becomes a kind of cognitive training. One that builds the same muscles we use to navigate complexity, delay gratification, and choose wisely. From dreaming, it’s just one small step to meditation—and that’s a well-documented rabbit hole of benefits.

And how many CEOs do we know who daydream or meditate?

Oleksiy Nagornyy:

I wouldn't say it's entirely the CEO's fault. Even when a company does hire inexperienced employees, managers often equip them with AI from day one to boost productivity and shorten onboarding. New starters then come to rely on AI auto-suggestions at every step, which erodes their ability to solve problems independently and undermines their fundamental understanding. They turn into LLM operators, mindlessly accepting machine advice, which is about as bad as not hiring inexperienced people at all.

To expand on your 'who cares?', I'd add that the gradual displacement of human intelligence is inevitable. It fits naturally into the theory of gene-culture coevolution, which doesn't care about the intelligence of individuals. What matters is the growth of collective intelligence in planetary (and eventually universal) life. In the socio-cognitive networks of global intelligence, evolution doesn't care whether people or algorithms are more numerous.

Jurgen Appelo:

Good points. But nowadays people work out for fun to regain the physical prowess that once came automatically through physical labor, before machines replaced it. Who knows, maybe in the future we will exercise our brains for fun because society might still value smart people.

Oleksiy Nagornyy:

Speaking of the future, I agree with the perspective of Nobel Prize-winning scientist Geoffrey Hinton, often called the 'Godfather of AI', who has warned that many intellectual jobs are at risk of being replaced by AI, while manual trades such as plumbing may remain safe for years to come. In my opinion, people will shift toward physical work, since manual labour in some areas is still too expensive to hand over to robots. They might still pursue science as a hobby, though. However, in the age of the singularity, humans will no longer be the creators of meaning.

Hari Dutt:

Makes a lot of sense... The focus on value creation is paramount, and ensuring that value is long-lasting determines the longevity of the company. Loved reading this unfiltered perspective.
