For people who have no computer experience, my typing speed already feels alienating. But I don't think regression will happen in that way; complex systems are usually already fragmented: people who know how to mine copper, for example, may not understand how a landline phone works. With AI we are shrinking those chains of production and compressing knowledge. Whether we end up knowing less but capable of producing more, or knowing more but capable of producing less on our own, will depend on the direction we take.
It always comes down to the same question: Where do we give up understanding in exchange for time and new opportunities?
Right now, people usually give up understanding when they don't need it in order to operate: take the mobile phone, which almost nobody understands yet almost everyone is capable of operating. Every performance around us is an act of farming significance, and as soon as significance can be farmed in that automated way, it will be. Many things we currently consider disabilities or bad nurture will be improved or substituted through prosthetic thinking, a form of AI, and in that case it will be applied all the way: no limits, or rather, resources as the only limits.
While writing this I had another thought: it feels like two major groups will emerge naturally. The majority will rely fully on AI assistance, while a minority will use AI as a tool applied to something else, and that something must be genuinely new (unknown to AI). Such novelty will exist, but only for a very short time, because everything will be cycled through AI; and then at some point a singularity emerges, and none of it will matter at all?
This very insightful discussion makes me think of all the ways in which we stopped understanding our complex organizations even before AI. In Andy Grove's 1983 book High Output Management, he writes of the importance of constantly carving little windows into the "black box" that is our organizational system, in order to give ourselves a glimpse of some of the mysteries within and try to make sense of what we see through experimentation, etc. The Cynefin Framework also helps address this "sense-making" challenge.
So maybe in the AI era we need a more deliberate, amped-up, and aggressive "sense-making" approach and capability. I may never need to know how a cell phone works, or how AI images are generated, but I do need to know how to keep the systems that I am responsible for performing well, and how to adapt them when performance misses the target.
Yes, but the problems start when there's no way to understand the language the machines use among themselves. There is no Cynefin domain called “Incomprehensible.”
True, but "Chaotic" might capture it adequately for our purposes here. For when we find ourselves in a situation we don't comprehend--that is, we see no pattern, no cause-effect relationships, etc.--the best we can do is probe-sense-respond in order to start seeing patterns and cause-effect relationships. So whether that which we cannot comprehend is AI language or anything else, it makes sense to try this.
Indeed, some acquaintances at Google told me maybe 10 years ago that no one at Google understands the search algorithm anymore, as it continues to morph itself in innumerable ways. But it's still very much in Google's interest to make sure it's performing well, and thus they have a dedicated function to constantly PSR it.
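To make PSR concrete, here's a minimal sketch of what such a loop might look like in code. Everything in it is hypothetical (the probe variants, the simulated metric, the target), standing in for whatever instrumentation a real black-box system exposes:

```python
import random

def probe(options):
    """Inject a small, low-risk perturbation into the black box.
    These probe inputs are hypothetical placeholders."""
    return random.choice(options)

def sense(probe_input):
    """Observe the system's response to the probe.
    A real loop would read live metrics; this simulates one."""
    baseline = 50 if probe_input == "variant_a" else 60
    return random.gauss(baseline, 5)

def respond(log, probe_input, outcome, target=55.0):
    """Act on what we sensed: reinforce probes that hit the target."""
    log.setdefault(probe_input, []).append(outcome)
    if outcome >= target:
        log["preferred"] = probe_input
    return log

# Probe-sense-respond: we never open the black box, we just keep
# accumulating observations until cause-effect patterns emerge.
log = {}
for _ in range(20):
    p = probe(["variant_a", "variant_b"])
    o = sense(p)
    log = respond(log, p, o)

print("emerging pattern:", log.get("preferred"))
```

The point of the sketch is that the loop treats the system purely from the outside: perturb, measure, and only then decide how to respond.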
Chaotic is a measure of volatility. Incomprehensible/inscrutable is a measure of intricacy. They're different dimensions and the Wicked Framework visualizes that a lot better than Cynefin IMHO. Handling chaos versus intricacy may require different approaches.
https://substack.jurgenappelo.com/p/agicomplex-not-complicated
If we're primarily concerned with how best to make sense of things that we inherently don't understand--such as outcomes of AI conversations taking place in an incomprehensible language--what approach would you (or the Wicked Framework, or any other cogent model) suggest? How would it differ from PSR, if at all, and why?
A lens I learned to use when applying any model or framework (all of which are wrong, some of which are useful!) is to make sure that we honor "the eye of the beholder." For example, I regularly help clients tackle situations that they perceive as chaotic or complex, but which actually follow a pattern I've seen many times. So if I can help them behold it the same way I do, then they can accelerate progress toward a solution using a completely different approach (e.g., no need to probe or even sense, just respond with a proven solution).
(The Wicked Framework, btw, strikes me as a bit more absolute...things are either static or changing rapidly, connected or isolated, etc. So if I perceive modularity as, say, loose when someone else perceives it as tight, what to do?)
Funny. I created the Wicked Framework precisely to make it NOT absolute, unlike certain other sensemaking frameworks. The whole point is that a system can have any combination of the 3x6 values. Eye of the beholder, indeed. The problem with Cynefin is that 99% of people interpret it as things being Complicated OR Complex. It invites a false dichotomy, because many things are complicated AND complex.
Anyways, I have not yet elaborated on tactics for this model. I just recognize that volatility invites a different sensemaking approach than intricacy. And yes, the approaches can be mixed when things are perceived as both volatile and intricate.
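To illustrate the "no false dichotomy" point, here is a toy sketch. The dimension names and value labels below are placeholders assumed for the example, not the framework's actual 3x6 values; what matters is that each dimension is scored independently, so a system can be volatile AND intricate at once, and the suggested moves get mixed accordingly:

```python
from dataclasses import dataclass

# Illustrative only: these dimensions and values are assumed for the
# example, not the Wicked Framework's actual 3x6 values. The point is
# that dimensions are independent axes, not mutually exclusive buckets.

@dataclass
class SystemProfile:
    volatility: str   # e.g. "static" / "shifting" / "turbulent"
    intricacy: str    # e.g. "transparent" / "opaque" / "inscrutable"
    modularity: str   # e.g. "loose" / "mixed" / "tight"

def sensemaking_moves(p: SystemProfile) -> list[str]:
    """Mix approaches when several dimensions score high at once."""
    moves = []
    if p.volatility == "turbulent":
        moves.append("stabilize, then probe-sense-respond")
    if p.intricacy == "inscrutable":
        moves.append("carve windows into the black box and experiment")
    if p.modularity == "tight":
        moves.append("decouple parts before intervening")
    return moves or ["apply a known good practice"]

# A system can be both volatile AND intricate -- no forced dichotomy.
ai_pipeline = SystemProfile("turbulent", "inscrutable", "tight")
print(sensemaking_moves(ai_pipeline))
```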