
The 2023 Good Tech Awards

In the tech industry, 2023 was a year of transformation.

Spurred by the success of last year’s breakout tech star, ChatGPT, Silicon Valley’s giants rushed to turn themselves into artificial intelligence companies, jamming generative A.I. features into their products and racing to build their own, more powerful A.I. models. They did so while navigating an uncertain tech economy, with layoffs and pivots galore, and while trying to keep their aging business models aloft.

Not everything went smoothly. There were misbehaving chatbots, crypto foibles and bank failures. And then in November, ChatGPT’s maker, OpenAI, melted down (and quickly reconstituted itself) over a failed boardroom coup, proving once and for all that there’s no such thing in tech as resting on your laurels.

Every December in my Good Tech Awards column, I try to neutralize my own negativity bias by highlighting a few lesser-known tech projects that struck me as beneficial. This year, as you’ll see, many of the awards involve artificial intelligence, but my goal was to sidestep the polarized debates about whether A.I. will destroy the world or save it and instead focus on the here and now. What is A.I. good for today? Whom is it helping? What kinds of important breakthroughs are already being made with A.I. as a catalyst?

As always, my award criteria are vague and subjective, and no actual trophies or prizes are involved. These are just small, personal blurbs of appreciation for a few tech projects I thought had real, obvious value to humanity in 2023.

Accessibility — the term for making tech products more usable by people with disabilities — has been an underappreciated area of improvement this year. Several recent advances in artificial intelligence — such as multimodal A.I. models that can interpret images and turn text into speech — have made it possible for tech companies to build new features for disabled users. This is, I’d argue, an unambiguously good use of A.I., and an area where people’s lives are already improving in meaningful ways.

I asked Steven Aquino, a freelance journalist who specializes in accessible tech, to recommend his top accessibility breakthroughs of 2023. He recommended Be My Eyes, a company that makes technology for people with impaired vision. In 2023, Be My Eyes announced a feature known as Be My AI, powered by OpenAI’s technology, that allows blind and low-vision people to aim their smartphone camera at an object and have that object described for them in natural language.
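For readers curious about the plumbing, here is a rough idea of how an app could turn a photo into a plain-language description using a multimodal model. This is a minimal sketch, not Be My Eyes' actual implementation; the model name, prompt, and file name are assumptions for illustration only.

```python
# Hypothetical sketch of photo-to-description with a vision-capable OpenAI model.
# Not Be My Eyes' actual code; model choice and prompt are assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def describe_image(path: str) -> str:
    """Return a short, plain-language description of the image at `path`."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable model would work here
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Describe this scene for a blind user in one or two sentences.",
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


print(describe_image("kitchen_counter.jpg"))
```

In a real assistive app, the returned text would then be read aloud by the phone's screen reader or a text-to-speech voice.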

Mr. Aquino also pointed me to Apple’s new Personal Voice feature, which is built into iOS 17 and uses A.I. voice-cloning technology to create a synthetic version of a user’s voice. The feature was designed for people who are at risk of losing their ability to speak, such as those with a recent diagnosis of amyotrophic lateral sclerosis or another degenerative disease, and gives them a way to preserve their speaking voice so that their friends, relatives and loved ones can hear from them long into the future.

I’ll throw in one more promising accessibility breakthrough: A research team at the University of Texas at Austin announced this year that it had used A.I. to develop a “noninvasive language decoder” that can translate thoughts into speech — read people’s minds, essentially. This kind of technology, which uses an A.I. language model to decode brain activity from fMRI scans, sounds like science fiction. But it could make it easier for people with speech loss or paralysis to communicate. And it doesn’t require putting an A.I. chip in your brain, which is an added bonus.

When CRISPR, the Nobel Prize-winning gene editing tool, broke into public consciousness a decade ago, doomsayers predicted that it might lead to a dystopian world of gene-edited “designer babies” and nightmare eugenics experiments. Instead, the technology has been allowing scientists to make steady progress toward treating a number of harrowing diseases.

In December, the Food and Drug Administration approved the first gene-editing therapy for humans — a treatment for sickle cell disease, called Exa-cel, that was jointly developed by Vertex Pharmaceuticals of Boston and CRISPR Therapeutics of Switzerland.

Exa-cel uses CRISPR to edit the gene responsible for sickle cell, a debilitating blood disease that affects roughly 100,000 Americans, most of whom are Black. While it’s still wildly expensive and difficult to administer, the treatment offers new hope to sickle cell patients who have access to it.

One of the most fun interviews I did on my podcast this year was with Brent Seales, a professor at the University of Kentucky who has spent the past two decades trying to decipher a set of ancient papyrus manuscripts known as the Herculaneum Scrolls. The scrolls, which belonged to a library owned by Julius Caesar’s father-in-law, were buried under a mountain of ash in 79 A.D. during the eruption of Mount Vesuvius. They were so thoroughly carbonized that they couldn’t be opened without ruining them.

Now, A.I. has made it possible to read these scrolls without opening them. And this year, Dr. Seales teamed up with two tech investors, Nat Friedman and Daniel Gross, to launch the Vesuvius Challenge — offering prizes of up to $1 million to anyone who successfully deciphers the scrolls.

The grand prize has still not been won. But the competition sparked a frenzy of interest from amateur history buffs, and this year a 21-year-old computer science student, Luke Farritor, won a $40,000 intermediate prize for deciphering a single word — “purple” — from one of the scrolls. I love the idea of using A.I. to unlock wisdom from the ancient past, and I love the public-minded spirit of this competition.

I spent a lot of time in 2023 being shuttled around San Francisco in self-driving cars. Robot taxis are a controversial technology — and there are still plenty of kinks to be worked out — but for the most part I buy the idea that self-driving cars will ultimately make our roads safer by replacing fallible, distracted human drivers with always-alert A.I. chauffeurs.

Cruise, one of the two companies that were giving robot taxi rides in San Francisco, has imploded in recent days, after one of its vehicles struck and dragged a woman who had been hit by another car. California regulators said the company had misled them about the incident; Cruise pulled its cars from the streets, and its chief executive, Kyle Vogt, stepped down.

But not all self-driving cars are created equal, and this year I was grateful for the comparatively slow, methodical approach taken by Cruise’s competitor, Waymo.

Waymo, which was spun out of Google in 2016, has been logging miles on public roads for more than a decade, and it shows. The half-dozen rides I took in Waymo cars this year felt safer and smoother than the Cruise rides I took. And Waymo’s safety data is compelling: According to a study the company conducted with Swiss Re, an insurance firm, in 3.8 million self-driving miles Waymo’s cars were significantly less likely to cause property damage than human-driven cars, and led to no bodily injury claims whatsoever.

I’ll put my cards on the table: I like self-driving cars, and I think society will be better off once they’re widespread. But they have to be safe, and Waymo’s slow-and-steady approach seems better suited to the task.

One of the more surprising — and, to my mind, heartening — tech trends of 2023 was seeing governments around the world get involved in trying to understand and regulate A.I.

But all that involvement requires work — and in the United States, a lot of that work has fallen to the National Institute of Standards and Technology, a small federal agency that was previously better known for things like making sure clocks and scales were properly calibrated.

The Biden administration’s executive order on artificial intelligence, released in October, designated NIST as one of the primary federal agencies responsible for keeping tabs on A.I. progress and mitigating its risks. The order directs the agency to develop ways of testing A.I. systems for safety, come up with exercises to help A.I. companies identify potentially harmful uses of their products, and produce research and guidelines for watermarking A.I.-generated content, among other things.

NIST, which employs about 3,400 people and has an annual budget of $1.24 billion, is tiny compared with other federal agencies doing critical safety work. (For scale: The Department of Homeland Security has an annual budget of nearly $100 billion.) But it’s important that the government build up its own A.I. capabilities to effectively regulate the advances being made by private-sector A.I. labs, and we’ll need to invest more in the work being done by NIST and other agencies in order to give ourselves a fighting chance.

And on that note: Happy holidays, and see you next year!
