
What OpenAI CEO Sam Altman’s Congress Testimony Means For Entrepreneurs


Sam Altman testified before a Senate Judiciary subcommittee yesterday, following the resounding success of ChatGPT, his company’s large language model chatbot. Following in the footsteps of Jeff Bezos and Mark Zuckerberg, who have also been quizzed by lawmakers seeking to understand their companies, Altman swapped his usual sweater and jeans for a suit and tie for the three-hour hearing, answering questions on artificial intelligence and ChatGPT.

Joining Altman at the hearing were Christina Montgomery, chief privacy and trust officer at IBM, and Gary Marcus, professor emeritus at New York University and a frequent critic of AI technology. The hearing was fast-paced and covered a lot of ground, but ended amicably. “You’ve performed a great service by being here today,” said Senator Richard Blumenthal, the subcommittee chair, to the three witnesses as they closed out the session.

Expected to be discussed were topics such as jobs, accuracy, misinformation, bias, privacy and copyright, all of which have huge implications for entrepreneurs looking to use AI in their work or to create AI tools. Those topics were discussed, but they led to one unanimous conclusion: artificial intelligence needs to be regulated. Everyone in the room agreed it should happen, but the details of who, how and when are yet to be decided.

Here are the five key points of Sam Altman’s Congress testimony for entrepreneurs.

1. AI is not going anywhere

There’s no doubt that AI-ification is the next business revolution. It’s already started. Altman said during the testimony that “people love this technology,” and Blumenthal added that the public “has been fascinated” with ChatGPT and other AI tools, the results of which are “no longer fantasies of science fiction.”

The subcommittee hearing brought up the open letter signed by major tech players including Elon Musk, calling for a pause on AI developments more advanced than the current version of ChatGPT. The three witnesses gave noncommittal answers on the pause, before Blumenthal stated that “the world won’t wait” and that “sticking our head in the sand” is not the answer. No pause, no nonsense, just solutions alongside progress, it seems.

There is no going back. The upside is too big. President Biden, who recently met with a group of AI company chief executives, told them, “what you’re doing has enormous potential and enormous danger.” Altman believes “the benefits of the tools we have deployed so far vastly outweigh the risks,” but no one yet knows if AI is ultimately a force for good or evil.

The AI dilemma for entrepreneurs is whether they pivot now or wait it out. Do they replace their team and processes with AI tools, do they streamline and see how they incorporate the technology, or do they continue as normal without it? And what about creating the tools of the future? Do they give it a go, see what they can make, and potentially ride this wave, or do they assume AI will be stopped in its tracks, with a subsequent downgrading of life back to some time in early 2022? In reality, there’s only one option.

2. Copyright is a big issue

Of all the issues the expansion of AI has brought up, how to protect and credit the work of creators is one of the most topical for entrepreneurs. A creator who has written articles, courses, books and keynotes over a sustained period does not want their work used to supplement the lazy content of others. And why would they? They see no benefit in their former effort and energy being used to train large language models.

An example raised within the hearing was the image of “a purple elephant playing a trombone in the style of Van Gogh,” which image generators like DALL-E and Stable Diffusion can produce in seconds because their algorithms are trained on vast quantities of existing illustrations scraped from the web. These original works were used without consent, credit or compensation for the original artists. And something about that doesn’t seem right.

Altman said all the right things: that content owners should receive “significant upside benefit” when this happens, and that “creators deserve control.” They should be able to prevent their copyrighted material, as well as their voice and likeness, from being used to train AI models. In the future this might mean images, and article text too, state when they have been generated by AI. But is this what most entrepreneurs want? And how realistic is AI crediting hundreds of sources for every picture?

On one hand, entrepreneurs want to be able to create catchy headlines, meta descriptions, Instagram captions and SEO-focused blog articles using AI tools, without having to state they came from AI. On the other hand, when they create original work, they want the credit, and they don’t want it regenerated and regurgitated by someone else’s ChatGPT prompt. They don’t want to have to manually opt out of their life’s work being used to train AI models over which they have no say or control.

The goals of everyone involved seem conflicted, meaning creators need to rethink their work and their commercial viability, potentially changing course in the process. The writing on the wall is that content is being devalued as a commodity. Those that win are those that commercialize that commodity in a lucrative way, or whose creations are so brilliant that the real deal comes at a premium. Picasso prints are everywhere, but the originals sell for millions. Will this be the same equation for every creator?

3. Regulation is coming

Everyone in Sam Altman’s Senate Judiciary subcommittee hearing agreed that regulation was required. Altman recommended “an agency that ensures compliance,” and Marcus said he would introduce a “nimble monitoring agency” to check that AI is being used responsibly. He would also commission more AI research. At previous Senate hearings, technology leaders have insisted they can regulate themselves, but this was a different message. Surely OpenAI knows legislation is on the horizon and wants to be part of the discussion.

At the hearing, the lawmakers weren’t sure they were up to the enormous task of trying to regulate AI at a fast enough pace, stating that they didn’t necessarily have the capacity or resources to create such a department. Montgomery brought up the European Union’s “first-ever legal framework on AI,” due to be voted on soon, which proposes a complete ban on facial recognition technology in public places, along with further rules governing AI technologies depending on their overall impact, on a scale from minimal to unacceptable risk.

Presumably, compliance will mean forms, meetings and long waits for appointments. Even if voted through, the EU wouldn’t have its standards ready until late 2024. Where could the world be by then?

The benefits of legislation are unclear for entrepreneurs. On one hand, if regulations protect humanity, that can clearly only be a good thing. On the other hand, if alongside protecting humanity, regulations create high barriers to entry into specific markets, the tech giants once again win big and entrepreneurs are left being users of the tools, not creators. Big companies have entire departments for filling out paperwork and dealing with red-tape requirements; smaller ones less so. OpenAI’s biggest investor is Microsoft; they know what they are doing.

When regulation goes wrong, it simply stifles competition and creates monopolies. Plus, the track record of governments enforcing regulation isn’t exactly promising. “Dozens of privacy, speech and safety bills have failed over the past decade because of partisan bickering and fierce opposition by tech giants,” said technology writer Cecilia Kang. The lawmakers shuffle papers between their departments, the well-funded companies lobby against the laws, and the entrepreneurs carry on doing what they do.

4. Liability is complicated

If someone walks out in front of a driverless car, that driverless car has to make a choice. Does it stop to protect the pedestrian and risk harming the passengers, or does it opt to protect the people it carries? If that driverless car killed a person as a result of making that decision, where does the liability lie? Perhaps it’s with the pedestrian who should have looked. Or the passengers and car owners who should have overridden the technology. Maybe it’s with the car manufacturers, the people who trained the algorithm, or maybe we’ll see driverless cars themselves take the stand.

Issues along these lines were raised in the hearing. Who should be sued if an AI product causes harm? And while the US is famous for its legal-first thinking, with an estimated 40 million lawsuits filed in 2019 from a population of 328 million, the issue is important. If there are no consequences for unscrupulous training of models, or for how those models develop, multiply and become more powerful, how can the potentially harmful actions of AI models be stopped?

Section 230 of the United States Code shields social media platforms from liability for content posted by their users, but the equivalent rules for AI companies are not yet decided. While Altman said that AI “is developed with democratic principles in mind,” he wants to collaborate with the US government to figure out the next steps, suggesting a “liability framework” around using AI technology. “We acted too slowly with social media,” added Marcus.

Entrepreneurs should exercise caution in the following ways. Firstly, get clued up on the tell-tale signs of AI-generated content, including voice cloning and image regeneration, so you’re not scammed yourself. Next, see your work, and the tools you create, through a liability lens. What could go wrong with what you’re creating? How could the AI you’re incorporating into your business processes potentially harm a client? Be part of the solution by innovating conscientiously within your own business, and share how you’re doing it so other business leaders follow suit.

5. The impact of AI will be profound

The subcommittee attendees agreed that AI technology has huge potential. Senator Josh Hawley’s opening statement said it is “transforming our world right before our very eyes.” Altman said OpenAI can help find solutions to “some of humanity’s biggest challenges, like climate change and curing cancer.” Although he stressed that the current systems are not yet capable of that, he reported how “it has been immensely gratifying” to watch many people around the world get so much value from the tools.

“Tools” is a key word here. In response to questions about technology replacing employees, a subject he had more to say about, Altman was clear that ChatGPT is a tool for tasks rather than jobs. Although AI is having a significant impact on jobs, and has been for years, projecting this forward to predict the future is difficult. Altman reminded the subcommittee attendees that as technology has grown in capability over its history, so has our quality of life. He said not only can “current jobs get much better,” but he is “very optimistic about how great the jobs in the future will be.” He believes OpenAI’s technology may destroy some jobs, but also create new ones. He passed the question over to the US government, to “figure out how we want to mitigate that.”

From a business perspective, it’s clear that entrepreneurs are using AI to create impact of their own. “People love this technology,” reminded Altman. Product-market fit, right there. Business basics 101. If you took ChatGPT away, many entrepreneurs would be devastated. But AI is much more than chatbots and ChatGPT, and using it well can revolutionize a business. If AI is creating new jobs, entrepreneurs can hire those people. Prompt engineers, AI whisperers, Midjourney and ChatGPT specialists. AI consultants, to tell you how to incorporate AI into your processes. All exist, and all have skills that can be utilized by entrepreneurs.

Fighting against AI in certain arenas seems counterproductive. If a student can pass a test by looking up the answers on ChatGPT, it’s not the student that needs to change, it’s the assessment process. If an entrepreneur can have a business idea, then use artificial intelligence to create a brand and a landing page and test it within the hour, what could that mean for their commercial success?

There is no denying the impact will be huge. The impact of not moving forward is that you don’t understand what is happening and what it might mean for your business. The impact of doing your research is that you’re clued up to spot opportunities and adapt at the right time. Although it might not be a case of adapt or die in the AI revolution, it can’t hurt to know the lay of the land and where you might fit in its future.

US senators are on the case with regulating AI, and they are making plans for how to do it effectively. Other countries are scrambling teams to write the guidelines and set the bar for commercializing AI and expanding its capabilities. While legislation catches up with reality, entrepreneurs can decide how they will move forward. AI is not going anywhere, copyright issues will keep cropping up, liability is complicated, but the impact is huge. The tools are in your hands and you can choose how to use them.
