My central frustration with technology, and AI specifically, is that it’s rare to see a company clearly articulate specific problems and why their technology is the best bet to solve them.
I understand that at some level, human curiosity is simply insatiable. If we can, we will, unless the consequences are severe enough to overcome the curiosity. I also understand that at another level, the reasons for building anything and thrusting it upon the world are incredibly complex, and that this complexity is probably inherent to any drive to do anything. That is to say, it’s not necessarily just a tech problem.
I see it in music, for example, a space where the idea of having “problems to solve” doesn’t really even make sense. An artist can make a slop album to satisfy a record label, or out of some egotistical drive for relevance and power, but it is often an inert creation. It doesn’t materially impact people the way that technology can. The risk is that people get sucked into a fandom that harms them in some way, and this can be destructive, but it is very different from the kind of destruction we’ve seen technology wreak in the world.
Music cannot create the atom bomb.
It’s possible to make the case here that the effects of creation in different industries, like art, are just as potent, and that they simply exert their influence in more subtle ways that only aggregate to real material consequences over decades. But that claim is far too speculative, and is fundamentally unfalsifiable.
I digress. The point I’m trying to make is that while I am in no way a Luddite, I sympathize with and understand those who demand answers from those building technology.
So what questions should we be asking of technology, and what questions should we be asking ourselves in our relationship to it?
Question Set 1 - Got a problem?
There are two high-level categories I like to use to classify modern technology:
- Problem-Oriented
- Technology developed with the explicit goal to solve a scoped problem
- Utopian
- Technology developed with the implicit goal of inching society closer to some utopia
There is overlap, to be fair, and it’s not always so easy to place a new development in either bucket, but it’s a helpful low-res heuristic to begin analyzing why something exists.
A great example of a problem-oriented technology is Signal. There didn’t exist an ergonomic way to accessibly, securely, and privately message people across different device architectures, so Signal built one.
Claude, on the other hand, is a utopian technology. What problem does Claude solve? It may be used to solve a variety of problems, but it is not built with any scoped problem in mind. Another name for the Utopian bucket is the Augmentation bucket. Claude is built to augment the human mind.
It is right to be generally wary of utopian technology, simply because we should be wary of adding things to our lives that aren’t strictly necessary to improve a specific function. However, the existence of the utopian bucket doesn’t exonerate all problem-oriented technology.
Question set 1 is therefore:
- What problem does this technology solve, if any?
- What are the externalities of that technology being created?
- What vision of utopia is this technology trying to propel us towards?
- Do I believe in this and align with that vision?
Question Set 2 - Incentives
It’s critical to understand the incentive alignment between the progenitors of technology and its users. The obvious example of this is social media. Instagram’s incentive is to get you using the app as much as possible. This is where they make money. Your incentives might be very different. Perhaps your goal in using Instagram is to find an audience for your artwork. Is your incentive aligned with theirs? Probably not.
Understanding the incentives is usually pretty simple because we are scoping this discussion to technology built by companies whose legal goal is to benefit shareholders. Because of capitalism, you just need to understand what drives value and you have your answer for their side of the equation.
On a consumer level, it’s a bit more complicated. This is one of my gripes with the social media conversation generally. For the longest time, all of the discourse was about the dopamine hamster wheel of social media, while relatively little attention was given to understanding the user incentive structure more deeply.
Question set 2 emerges from this:
- How does this technology make the company more valuable?
- What are the resulting incentive structures?
- What product changes will we expect to see to support these incentives?
- What are my own incentives for using this technology?
- Is this tech the best way for me to achieve those goals?
- How aligned are my incentives with the company’s?
Answering the Questions
Generally, I believe we should be skeptical of Utopian bucket technology. If we decide to engage with it, it’s important to understand not only the vision being fed to us, but whether this specific form is the best way to achieve it.
It may be the case that you generally align with the utopian vision, but find that the actual consequences stemming from widespread use of the technology create a very different world.
With respect to incentives, it’s obviously best to use technology that aligns with your incentive structures. Ableton is a fun example here. I love using Ableton, and I love the payment model they have. There are no subscriptions and no usage-based costs. You pay a single price to download a piece of software to make and record music. Given there is no marginal benefit to using the technology in any particular way once you download it, it exists as a true creative playground.
Ableton’s incentive is to sell more licenses. To do this, they have to build a product people are willing to pay a large one-time cost for. That’s a tall order in a subscription-rampant software industry. I believe this makes the product better for artists.
There is a lot more to say on this topic and I’d like to explore it further but that’s all she wrote for today’s entry.