329: "AI For The Rest of Us"

Arvid:

Earlier this week, Apple announced that it will introduce artificial intelligence into all of their devices, their operating systems, and all apps. As founders, we know the potential of AI and we're realistic about its limits, but how will consumers react when they have access to this tool on their phones? And what does that mean for those of us building websites and apps for them? I'm Arvid Kahl, and this is the Bootstrapped Founder podcast. As I watched the latest Apple keynote from WWDC, one sentence really stood out to me.

Arvid:

AI for the rest of us. Apple is going to introduce artificial intelligence into their devices, their software, their operating systems, and all apps in a way that will make it ubiquitous within probably a few months, maybe even just a few weeks, once people start downloading and installing the latest beta version of iOS. The question is: how will that impact consumer expectations around AI? Will it be better or worse for us entrepreneurs and the products that we're building? Today, I'm going to explore the current state of consumer-facing AI, how a major player introducing it into their hardware and software might influence that, and how this affects solopreneurs and bootstrapped software businesses.

Arvid:

This episode is sponsored by acquire.com. The first big expectation will be that people are going to get used to the quirks and limitations of chat-based interfaces. I believe that, absolutely. People are gonna be exposed to this more, and they will see: it's not always great.

Arvid:

It's quirky. It kind of works, but not all the time. Most people using ChatGPT right now, maybe even over the last couple months or so before Apple started introducing it into the ecosystem, came to it via a recommendation from somebody else who also used ChatGPT. Usually, that stuff starts in a technical community and then makes its way into the marketing and HR and creative departments of the world. But it always came through this kind of chain of recommendations from technical people who, initially at least, understood the features, the complexity, and the limitations of the system, and would kind of pass that along to the people they recommended it to.

Arvid:

And obviously, someone who does not understand these limitations will have heightened expectations, but there was always a way to set these kinds of foundational baseline expectations for people. And for ChatGPT, over its development cycle, that also meant that it could slowly become better, because people had this kinda skeptical yet slightly positive approach to it: they knew that there were limitations, and they were fine with them. I think this will be significantly different once Apple introduces free ChatGPT usage to every single iPhone out there with the newest iOS update, whenever that might come, a couple months from now or even a year from now.

Arvid:

When that comes, things will change. Millions, if not billions, of people will, for the first time, be exposed to the shakiness, the instability, and the imperfections of AI systems and their responses, no matter how good Apple is at implementing their own local LLM on their devices or integrating with GPT-4o or whatever successor may come in the next few months. At this scale, consumers will reliably experience that AI systems are still effectively gaslighting engines until they're not. Right? These things gaslight the people using them.

Arvid:

They confidently talk about things that these AI systems don't necessarily understand, and they will try to convince you that their answer is correct. These systems just predict one token at a time, and from those tokens come sentences, and from those sentences come paragraphs, and that's the answer. But it's always: what is the next most likely token? And for humans, that means: what is the next most expectable token, the next most convincing token. So these LLM things, they pretty much gaslight us one token at a time. But I think that's fine; that's what they're for. That's what they do: they create a believable, reliable-sounding thing.
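That next-most-likely-token mechanic can be illustrated with a toy model. This sketch uses a bigram frequency table as a deliberately tiny stand-in for a real LLM's learned distribution; the corpus and the greedy decoding rule are illustrative assumptions, not how any production model actually works:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word most often follows each word: a tiny stand-in
    for a real model's learned next-token distribution."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, max_tokens=5):
    """Greedy decoding: always emit the single most likely next token,
    regardless of whether the resulting sentence is true."""
    out = [start]
    for _ in range(max_tokens):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

follows = train_bigrams("the cat sat on the mat the cat sat on the hat")
sentence = generate(follows, "the", max_tokens=3)  # "the cat sat on"
```

The output reads fluently because every step is locally plausible, which is exactly the "convincing one token at a time" effect: nothing in the loop ever checks a fact.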

Arvid:

But is it reliable? And people will notice: probably not. It's not truly trustworthy. And I think this has a couple of consequences that will show very clearly at scale. First, it might cause significant damage on occasion to individuals who will then stand out in their opposition to AI features.

Arvid:

A lot of people are gonna get burned by the answers, quite literally, probably, because somebody's gonna light their house on fire because the AI recommended some weird chemical solution to clean stuff up. People will suffer damage that they don't expect to get from just having somebody answer a question. And second, it will also train us as a society that these models, even though they occasionally don't give a good response, will most likely always give some kind of usable response for us to continue working with. We will have tools that, like an unsafe weapon, can go off on their own at any time and do lots of damage. But they're also tools that we understand are fallible, limited, and require a second opinion, yet are useful when applied correctly.

Arvid:

And as these tools continue to amaze people who use them correctly and to confuse those who use them incorrectly, I think we will see a learning experience on a societal level. Depending on their generation and how open they are to new technologies, people will quickly or slowly learn that AI tools are really useful under supervision. Over the next couple of years, the sentiment that is already clearly established in the technical community will spill into the consumer community as well. Right now, people often get quickly disillusioned by the limitations of AI. But once AI is integrated into a system where it interacts with your calendar, your email, and your voice messages, where it becomes a transmitter and transitional processor of information that makes it more dense, more actionable, and easier to deal with, people will see AI as a necessary and useful tool to facilitate and speed up interactions that they would generally want to have anyway. Right?

Arvid:

AI is not replacing stuff. AI is speeding things up. And people will move from seeing ChatGPT as AI, like, AI is a thing you type something into and it comes up with clever responses; that archetype will shift into AI as a holistically integrated tool, a part of the operating system. It won't be ChatGPT anymore, it will just be AI, the agent, the thing that does stuff for me. And just like in the Windows and Mac worlds over the last decades, where we have seen this development of quick search and tools like Alfred and Raycast, where you hit a shortcut and then you can enter anything, and any potential file, any application, or any website will come up in this quick-search, quick-fetch mechanism, this stuff will be integrated into the software stacks all around us to just, you know, get to what you want quickly.

Arvid:

And we will see AI tools become that on an operating system level. People will not go to ChatGPT or to Anthropic's Claude or wherever to do something anymore. They will have a standardized channel to ask any and many AI agents to either do something for them or fetch information that helps them do the thing themselves. And for AI to be truly effective, it will have to work seamlessly across different platforms and devices, requiring standardized protocols and ensured compatibility. One of the bigger and more important developments in AI that I see is that we will likely see a living experiment with Apple connecting you to ChatGPT and their own language models, which will receive updates every few months.

Arvid:

Like, it's gonna be a living, breathing thing that constantly changes. They talked about this in the keynote a little bit, how their on-device models and the server models are going to learn from you. They say they're not gonna use personal data to train, which is a pretty far-fetched claim, because how are they gonna learn how you personally do things without looking into your personal data? But we'll see what they come up with. They will try to be better for you. The data that Apple collects on the device that is maybe not personal but private, or maybe the other way around, they will use, if you allow them to, to train better models. Or they might even build an infrastructure on the device or in your network to train local language models for your own usage locally.

Arvid:

That would be really cool to see. Right? Where you don't even have to send that data anywhere. It will be used locally, overnight or whatever, to train a new model in the background. We might see training for specific personal purposes on the same device where inference will happen.

Arvid:

And over time, I think this cycle between training and inference will be expected for every kind of software that uses AI. Not just on your phone, not just on your computer, but inside the applications that you use. I think it's quite logical that if you have a text processor like Grammarly, which I use occasionally and which already has AI features, it will likely use an AI model trained exclusively on your writing style. That is gonna be very useful, because then it will catch your common mistakes and then train you. It's funny: you train the model to train you to write better.
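To make that loop concrete: this is not how Grammarly actually works, just a toy frequency-based stand-in for the idea of a tool learning your personal mistakes from the corrections you accept and then flagging them in new text:

```python
from collections import Counter

class PersonalStyleChecker:
    """Toy 'personal model': remembers which corrections you accept
    most often and flags those mistakes in new text."""

    def __init__(self, min_seen=2):
        self.min_seen = min_seen      # how often a fix must be accepted before we trust it
        self.corrections = Counter()  # (wrong, right) -> times accepted

    def record_correction(self, wrong, right):
        """Call this whenever the user accepts a suggested fix."""
        self.corrections[(wrong.lower(), right.lower())] += 1

    def suggest(self, text):
        """Return (wrong, right) pairs for known personal mistakes in text."""
        words = set(text.lower().split())
        return [(wrong, right)
                for (wrong, right), count in self.corrections.items()
                if count >= self.min_seen and wrong in words]

checker = PersonalStyleChecker()
checker.record_correction("teh", "the")
checker.record_correction("teh", "the")
checker.record_correction("recieve", "receive")  # seen only once, not trusted yet
hits = checker.suggest("i typed teh word again")  # [("teh", "the")]
```

A real version would sit on top of a language model rather than a counter, but the feedback loop is the same: your accepted corrections become the training signal.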

Arvid:

It's a very interesting dynamic here. It actually becomes an agent that has agency to impact and manipulate your behavior, if you want it to. That's kind of a nice idea, I think. It comes with its own potential downfalls, obviously, but it is an interesting approach. And if these individual models can be trained on individual styles, you might even use the writing-style models of other people, if they choose to open these models up, to kinda jazz up your writing, to get a second perspective, to have maybe an alternative way of writing a particular thing in somebody else's voice, just to see how you differ, where your arguments are different, where you argue differently.

Arvid:

It's a really interesting thing to see these local AIs as arbiters of the person they are trained on, as kind of an extension of our style, of how we think and how we write and how we behave, into an encapsulated virtual system. I find it very exciting. The implementation of AI, therefore, has to be accompanied by robust regulatory and legal frameworks that set standards for safety, privacy, and accountability. Because if these things are expressions of ourselves and extensions of ourselves, they have to be protected and safeguarded just as much as we as human beings supposedly are in the legal context. So if you use AI as a facilitator in your application, either as an assistant or a means for people to communicate, either with each other or with the system itself, the expectations of the future will probably include this personal approach, a safe and regulated but still personal approach, for that assistant or communicator to work with you or in your stead, replacing you or interacting with you.

Arvid:

It will be trained on unique data to work on unique data. There's very interesting layering here. I think this is a long-term expectation that will develop over the next decade or so, but you might wanna think about it from the start. Because if you will have to train some kind of virtual agent for the people using your product, well, what is it gonna do? What is its capacity gonna be?

Arvid:

What are the limitations that you wanna give it? What do you need to give it as a limitation? What data do you need to collect from today on to be able to reliably provide it with information that makes it useful? How do you need to protect it to facilitate training very personal, edge-specific models for your customers? And how do you need to protect your customers from those models? Because they're effectively black boxes, so you don't really know what's going on in there.

Arvid:

Ethical AI is critical at this point. Developers have to ensure that AI systems are designed and trained to be fair, transparent, unbiased, and safe to use. Right? If you wanna have a biased model, you will train a biased model.

Arvid:

But the default should always be a model that is accessible and fair to the person using it. I think we will have this conversation about ethics and AI for decades to come. It probably will continue for as long as we have AI as a concept in our lives. Because, you know, look at science fiction movies. What's that one called? I, Robot, with Will Smith.

Arvid:

The ethics of AI are the underlying theme of the whole movie, and they're kinda not resolved at the end of it. Like Asimov's Three Laws of Robotics and that kind of stuff, those are considerations that we've been carrying with us since the fifties, the forties even. So these things have been around even before AI actually was around. And now that AI is in our reality, we will have to think about these things even more. So the whole conversation about bias and censorship in models, that is not gonna be easily dismissed or easily answered, because it's gonna be an evolving process.

Arvid:

And if you use AI in your product, you are part not just of the conversation, but of the reality that impacts the conversation. So be careful of that. Be aware that the people using your product might use it as ammunition in this conversation, either way, pro or con, doesn't really matter. And there will also be a perception of AI being this massive gateway that allows a person to tell a system to do anything, to initiate almost anything, by writing it as a human-language instruction.

Arvid:

ChatGPT kinda paved the way here. Right? It was just a chat window. You put anything you wanted in there, and you got a response back. The more we see these things integrated into the systems that we use every day for minor tasks, and the better they get at actually executing and reliably understanding what we want from them, the more people will expect this kind of behavior to be integrated deeply into the system, and we will move away from the kind of step-by-step, point-and-click instruction stuff that we use right now.

Arvid:

Right? Open your calendar, click the plus, enter the date, enter the occasion, set if you wanna be alerted or something. That will move towards a much simpler instruction-based thing. And I think Apple Calendar is a great example here, because they've already been doing this over the last couple of years. In the Apple Calendar app on your iPhone or on your Mac, you can enter just a sentence saying, like, barbershop appointment next Wednesday at 4. You can just type that in.

Arvid:

And you hit enter, and the system will automatically infer the date, the purpose, and whatever kind of alarm settings you might need. If you have an address, or if it can pull up the address from a prior visit, it will just tell you, you know, you need to start driving, like, 30 minutes prior. This already exists in the pre-LLM AI world, and people will expect this in almost every piece of software that they encounter. They will go into Adobe Photoshop, and the prompt would probably be something like: an image, 800 by 600, half black, half white, white text in the middle that I can edit, and a black-and-white photo of an old baroque mansion in the background. And they will hit enter and expect this whole thing to show up within seconds.
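The calendar behavior described above can be approximated with a small parser. This Python sketch is a rough approximation, not Apple's implementation; the regex, the "assume afternoon" rule, and the 9 a.m. default are all assumptions made for illustration:

```python
import re
from datetime import datetime, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def parse_event(text, today=None):
    """Very rough natural-language event parser: pull out a weekday and
    an hour, and treat the text before them as the event title."""
    today = today or datetime.now()
    match = re.search(
        r"(?:next\s+)?(monday|tuesday|wednesday|thursday|friday|saturday|sunday)"
        r"(?:\s+at\s+(\d{1,2}))?",
        text, re.IGNORECASE)
    if not match:
        return None
    weekday = WEEKDAYS.index(match.group(1).lower())
    days_ahead = (weekday - today.weekday()) % 7 or 7  # always pick a future day
    start = today.replace(hour=9, minute=0, second=0, microsecond=0) \
        + timedelta(days=days_ahead)
    if match.group(2):
        hour = int(match.group(2))
        start = start.replace(hour=hour + 12 if hour < 8 else hour)  # "at 4" -> 4 pm
    title = text[:match.start()].strip(" ,") or "Untitled event"
    return {"title": title, "start": start}

event = parse_event("barbershop appointment next Wednesday at 4",
                    today=datetime(2024, 6, 10))  # a Monday
```

A real implementation handles dates, durations, locations, and ambiguity far more robustly, but the shape is the same: free text in, structured event out.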

Arvid:

They're gonna go into Microsoft Word and say: draft an outline for an article on the Galápagos Islands. Make it 5 pages long, about the economy and ecology of those islands and how they interact, and include several pictures sourced from Wikipedia, with citations. That's gonna be a prompt, and they expect Word to do this. Any word processor, they are gonna expect to be able to do this. And for us developers, well, we're gonna go into VS Code or Vim or PhpStorm or whatever, and we're gonna say: create the scaffolding for a REST API based software-as-a-service product that has API key support, is built on Laravel, and allows people to upload PDF documents to S3 and search them, like, semantically.

Arvid:

That is gonna be the prompt, and we expect something that will work out of the box within the next 2 or 3 minutes, after every dependency is installed. And those complex initial commands will likely become more common, not just as specific commands, but as a concept that people go to first. You know, like how we use Google when we wanna search something, when we wanna learn something, we just go to Google and search. This concept of how to act on the Internet, how to use technology, is gonna move to the prompt. I believe that. It's already there for me.

Arvid:

Whenever I have a problem, I don't think about what I should search now. It's: how can I tell any AI system, be it ChatGPT or the local AI system that I have inside of my code editor, to explore this problem and explain the solution to me as it explores it? I'm already going from Stack Overflow right into the AI systems, because the results have been so much better and require less of my time to search. So I will go there first. And I consider myself to be somewhere on the cutting edge of technology here.

Arvid:

So if I do this, and a lot of other people in our community do this, the widespread adoption of AI will move prompting as the first response into the much bigger public domain. And this doesn't mean you have to implement it in your software right now, but it's interesting to consider how you can allow your customers to eventually prompt your software to do things for them instead of having to click through your UI. There are WebGPU tools already out there that allow inference, or AI-like actions, almost instantaneously in the browser, provided that a model was loaded. Like transcription between the microphone and your server: that already exists using OpenAI's Whisper. You can start talking into your microphone, and there are libraries out there that will immediately transcribe it and send over the text.

Arvid:

It's there already for those of us who are kinda adventurous enough to implement it. Give it a year or so, and it's gonna be an extremely common library for you to use and integrate into every single front-end library or JavaScript system that you might use. Right? It's gonna be in Vue. It's gonna be in React. It's gonna be in Svelte.

Arvid:

It's gonna be in all these things, just as a default, because these front-end frameworks will compete over who can adopt AI first. And if this is the case in the technology, well then, what could you facilitate for your customers in your own business, in your own product? What could you give them that does something in seconds that would otherwise take them minutes if they were to do it themselves? The environmental impact of AI is gonna be a very big part of the discussion, because everybody is gonna try and put it into their products. Right?

Arvid:

So we will need to think about this as well, because, you know, energy consumption for training these models, for sure, is a concern, but even inference on your computer is gonna use a little bit more energy than just a regular website. And if you start live transcriptions, or if you do, like, video generation or image generation, this is gonna increase wattage a little bit for every single user. But if you have thousands, if not millions, of customers, that is a pretty massive impact on their energy bills in aggregate. So consider what this also means. Right?

Arvid:

Consider how you can maybe offload it into your own service and charge for it, or give people the option to do it kinda as an optional feature; that would be fine. Just consider that there is environmental impact here. AI as a smart tool in software businesses built by solopreneurs and bootstrappers is manageable in the browser already. Right? It already exists, and you can leverage it.

Arvid:

You could have all the actual inference happening in the browser itself at this point, provided that people download a really small model, maybe 50 or 200 megabytes in size. I think there are really tiny language models right now that are just, like, 100, 170-some megabytes in size, but have, like, 2 to 4 billion parameters. It's pretty impressive. And if Apple brings their AI to the edge device, to your iPhone or iPad, you will eventually have access to their model through system APIs, because they will expose it to as many apps as they can. And browsers will likely implement this soon, and I'm so excited for this.

Arvid:

It's likely that Chrome, Firefox, Safari, and all the others will have native JavaScript APIs to access the small but reliable local language models on the system. Not even something you have to download; something that is integrated into your operating system will be accessible through a JavaScript API in the browser. It might require some kind of download initially, but once it's there, you can use it through these APIs. And it might be installed by default for new installations of the operating system in the future. The development of these small language models is extremely fast-paced, and I think we will see better builds of these things within the next few months as well.

Arvid:

I've just followed this over the last couple weeks, and it's pretty impressive. If your browser or your mobile system has immediate access to this technology, at least for prompting your tool to do an action for your customer, that would be extremely useful. Right? Any search feature that you have in your product can be extended to allow for these AI-based searches, like RAG searches over the data in your customer database, driven by the prompts provided by your customers. All of a sudden, you don't have to present tables of data anymore.

Arvid:

You can pretty much have your customer ask a specific question, like: list all my database entries that are related to customers in the United States. And it can automatically generate an HTML table with a certain style. I hope we find ways to have our user interface show through these responses, but you can have all kinds of things. People can ask for graph plots, and the AI system that you implement has the capacity to show graphs or images. It's wild what you could potentially have people prompt your system for. And I think it's really cool.
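A minimal sketch of such a RAG-style search over customer records, with a deliberately naive bag-of-words "embedding" standing in for a real embedding model (the `embed` function and the prompt wording here are illustrative assumptions):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[token] for token, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rag_prompt(query, records, top_k=2):
    """Retrieve the records most similar to the query, then build the
    prompt you would hand to a language model as grounding context."""
    query_vec = embed(query)
    ranked = sorted(records, key=lambda r: cosine(query_vec, embed(r)),
                    reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Using only this data:\n{context}\n\nAnswer: {query}"

records = ["customer Acme United States",
           "customer Beta Germany",
           "invoice 42 overdue"]
prompt = rag_prompt("customers in the United States", records)
```

The language model then only sees the retrieved rows, which is what keeps the answer, say, an HTML table of US customers, grounded in your actual data instead of the model's guesses.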

Arvid:

Ultimately, I think we'll have to deal with AI expectations from normal users who've been exposed to this technology on their mobile devices, and they will expect it in their professional or consumer software sooner rather than later. I think we have a year or so until it's really widespread, but that's just a year, and you know how fast that goes by. Think about what you can do to facilitate training models on the edge, right, for specific tasks that your users might have.

Arvid:

Think about it. You don't have to implement it today, but follow the news. Follow developments here and see where this goes. Who is implementing this? How are they doing it?

Arvid:

Why are they doing it? Are the results useful? Or are they just doing it because it's cool new technology? Consider if you should, and how you can, enable your customers to prompt your software to do things for them instead of having to click through the UI; I talked about this earlier. And think about what you can do in the background, like on the server side, or maybe even on a back-end system that is completely decoupled from your actual application, to facilitate processes using AI technology that were previously based on regular non-AI-augmented technology, human work, or just a long calculation that can now be kinda guesstimated by AI much faster or much more reliably.

Arvid:

Building public trust in AI will always be essential for its widespread adoption. So don't rush into things. Transparency in AI processes and how they're implemented, and then clear communication about the benefits and limitations of AI, will be the key to gaining user acceptance and understanding, and to not having people complain about AI not working in your tool just as it's not working anywhere else. Be very specific about how you implement this, and do it with your customers: launch pilot projects, figure out how this can be useful without being overwhelming, and figure out what expectations are by just being constantly in contact with new users. Sometimes people will ask, well, how can I?

Arvid:

Or, why can't I just? Those are the opportunities for you to talk to them about what they expected and what triggered them to actually ask the question, what was missing. Always good to talk to your customers. Generally a good idea. And I hope you will find time in between your own use of AI to ponder if it's really necessary for your business, and if it is, how you can implement it in a way that your customers expect.

Arvid:

And that's it for today. I wanna briefly thank my sponsor, acquire.com. Now imagine you're a founder who's built this really cool SaaS product over the last couple of years: happy customers, good revenue, a solid product. But now AI has entered the picture, and you have no idea how to implement it. You're out of your depth. You've hit a skill ceiling that you never expected to hit.

Arvid:

Competitors show up that seem much better prepared for this. Now, is it time to just give up? Of course not. The business that you built has value for you, and definitely for other people as well. People who might have what it takes to step up to the plate and face the AI avalanche head-on, while you take a breather and look for something else to build, or maybe for time to actually learn how to get into AI so you can use it for your next project.

Arvid:

And too often at this point, founders don't even think of selling their business, but rather start a new side project, and that leads to stagnation and, ultimately, losing a lot of uncaptured value in the original business. So if you find yourself here already, or if you think your story is likely headed down a similar road, you can already feel it, I would consider a third option, and that's selling your business on acquire.com. Because capitalizing on the value of your time today is a pretty smart move. Acquire.com is free to list. They've helped hundreds of founders already.

Arvid:

So go to try.acquire.com/arvid and see for yourself if this is the right option for you. Thank you so much for listening to the Bootstrapped Founder today. You can find me on Twitter at arvidkahl. You will find my books and my Twitter course there too. If you wanna support me and this show, please subscribe to my YouTube channel, get the podcast in your podcast player of choice, and leave a rating and a review by going to ratethispodcast.com/founder.

Arvid:

It makes a massive difference if you show up there because then the podcast will show up in other people's feeds and that will really help the show. Thank you so much for listening. Have a wonderful day and bye bye.

Creators and Guests

Arvid Kahl
Host
Empowering founders with kindness. Building in Public. Sold my SaaS FeedbackPanda for life-changing $ in 2019, now sharing my journey & what I learned.