The "Architects of AI" were named Time's person of the year Thursday, with the magazine citing 2025 as when the potential of artificial intelligence "roared into view" with no turning back.
"For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME's 2025 Person of the Year," Time said in a social media post.
The magazine was deliberate in selecting people — the "individuals who imagined, designed, and built AI" — rather than the technology itself, though there would have been some precedent for that.
"We've named not just individuals but also groups, more women than our founders could have imagined (though still not enough), and, on rare occasions, a concept: the endangered Earth, in 1988, or the personal computer, in 1982," editor-in-chief Sam JacobsÌýwroteÌýin an explanation of the choice. "The drama surrounding the selection of the PC over Apple's Steve Jobs later became the stuff of books and a movie."
Time CEO Jessica Sibley, second from right, joined by OpenAI Chief Global Affairs Officer Chris Lehane, second from left, rings the New York Stock Exchange opening bell Thursday for TIME's "Person of the Year."
One of the cover images, resembling the "Lunch Atop a Skyscraper" photograph from the 1930s, shows eight tech leaders sitting on the beam: Meta CEO Mark Zuckerberg, AMD CEO Lisa Su, Tesla CEO Elon Musk, Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei and AI pioneer Fei-Fei Li, who launched her own startup, World Labs, last year.
Another cover image shows scaffolding surrounding the giant letters "AI" made to look like computer componentry.
Five of the eight people selected — Musk, Zuckerberg, Huang, Altman and Su — are already billionaires with a collective fortune of $870 billion, based on the latest estimates compiled by Forbes magazine. Much of their wealth was accumulated during the past three years of AI fever.
It made sense for Time to anoint AI because 2025 was the year that it shifted from "a novel technology explored by early adopters to one where a critical mass of consumers see it as part of their mainstream lives," Thomas Husson, principal analyst at research firm Forrester, said by email.
The magazine pointed to AI company CEOs' attendance at President Donald Trump's inauguration at the Capitol this year as a sign of the sector's prominence.
"This was the year when artificial intelligence's full potential roared into view, and when it became clear that there will be no turning back or opting out," Jacobs wrote.
Time CEO Jessica Sibley is interviewed Thursday on the floor of the New York Stock Exchange, adjacent to TIME's "Person of the Year" cover.
Some experts expressed caution over the AI boom and the race to develop increasingly powerful systems.
"Leading AI companies are working feverishly to replace humans in every facet of life, and they're not being shy about it," said Anthony Aguirre, executive director of the nonprofit Future of Life Institute, which works on AI safety issues. "The impact on our society could be catastrophic if there are no guardrails protecting what's human, and most important to us."
AI was a leading contender for the top slot, according to prediction markets, along with Huang and Altman. Pope Leo XIV, the first American pope, whose election this year followed the death of Pope Francis, also was considered a contender, with Trump, Israeli Prime Minister Benjamin Netanyahu and New York Mayor-elect Zohran Mamdani topping lists as well.
After winning his second bid for the White House, Trump was named 2024's person of the year by the magazine, succeeding Taylor Swift, who was the 2023 person of the year.
The magazine was bought by Marc Benioff in 2018. Benioff, one of the co-founders of cloud-computing firm Salesforce, called AI "probably the most important" technological wave of his lifetime. He says he doesn't get involved in Time's editorial decisions.
The magazine's selection dates to 1927; each year, its editors pick the person they judge to have most shaped the headlines over the previous 12 months.
Businesses are increasingly turning to AI to ensure accessibility for people with disabilities. Is it working?
An estimated 1.3 billion people worldwide experience significant disabilities, according to the World Health Organization. AI-powered tools are quickly becoming essential for creating more accessible workplaces, whether transcribing meetings and composing emails or describing images and converting voice to text.
In the United States, many of these innovations are helping companies meet the standards of the Americans with Disabilities Act of 1990, which requires employers to ensure accessibility in employment, transportation, public accommodations, and communication for employees.
However, the rapid rise of AI has also presented challenges, as many AI tools are far from perfect. Identified issues range from Slack messages deemed "robotic" to PDF summarization tools providing "completely incorrect answers" and AI programs introducing errors when generating content. In a 2023 study from the University of Washington, researchers noted that generative AI could have a wide variety of uses related to improving accessibility, but they also flagged significant concerns.
To further understand what AI means for people with disabilities in the workplace, this story looks at how businesses are using AI-driven models to increase accessibility and the limitations those tools still face.

AI is changing many aspects of the worker experience
From recruitment and onboarding to daily tasks like meetings and emails, AI technologies are helping companies become more productive and efficient while offering greater accessibility to people with disabilities.
In hiring, tools like Workable and hireEZ claim AI technologies help source candidates by analyzing profiles and matching them to job descriptions, saving time and improving accuracy. Others, like Pymetrics, a startup founded by a neuroscientist, are experimenting with AI-driven games designed to reduce unconscious bias and assess a candidate's skills objectively. (The startup has since been acquired by recruitment company Harver.) If successful, these advancements could lead to fairer hiring processes.
However, the technology's imperfections mean hiring processes still require continual human oversight. According to guidance released in May 2022 by the Equal Employment Opportunity Commission, employers using AI-powered hiring technology without adequate human review risk unlawfully screening out qualified candidates, including those with disabilities.
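One common safeguard is a human-in-the-loop gate, in which the model is never allowed to reject a candidate on its own. Below is a minimal Python sketch of that pattern; the field names, score, and threshold are hypothetical, and a real system would also log every routing decision for audit.

```python
# Sketch of a human-in-the-loop gate for an AI screening step.
# Hypothetical fields and threshold; the point is that candidates are
# never auto-rejected on the model's score alone.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_match_score: float  # 0.0-1.0, from an upstream screening model

def route(candidate: Candidate, advance_threshold: float = 0.8) -> str:
    """Advance clear matches automatically; send everything the model
    would screen out to a human reviewer instead."""
    if candidate.ai_match_score >= advance_threshold:
        return "advance"
    return "human_review"  # never "reject" without human eyes

print(route(Candidate("A. Example", 0.55)))  # -> human_review
```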
Companies are also offering a broader set of accessibility products aimed at individual workers with disabilities. For employees with vision impairments, apps like Microsoft's Seeing AI and Be My Eyes provide audio descriptions that help workers listen to text, identify objects, and recognize faces. Employees who are hard of hearing can use tools such as Google's Live Transcribe app and the audio word processor Descript, which provide transcriptions of spoken conversations. Tools like NaturalReader convert written documents into audio formats, making information more accessible.
Many of these tools fall into the category of universal design, a way of creating spaces that anyone can use. "Universal design is the design and composition of an environment so that it can be accessed, understood and used to the greatest extent possible by all people regardless of their age, size, ability or disability," according to Ireland's Centre for Excellence in Universal Design. "This is not a special requirement for the benefit of only a minority of the population. It is a fundamental condition of good design."
AI tools can create more accessible experiences
One of the most important frontiers of accessibility has been online. By adhering to the Web Content Accessibility Guidelines issued by the World Wide Web Consortium, designers can create websites and web-based environments that can be accessed by everyone, regardless of ability.
In practical terms, adhering to these standards means ensuring content has enough contrast so people with limited vision or colorblindness can read the text. Adding "alt text" to images allows visual information to be conveyed to screen reader users, and captions on videos let people who are deaf or hard of hearing follow the information presented. It's also important to ensure pages can be navigated with a keyboard or other assistive input rather than a mouse alone.
New AI-driven tools can help teams meet these guidelines more efficiently and effectively. For instance, they can auto-generate captions, suggest alternative text for images, or flag insufficient contrast.
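The contrast check, at least, is simple enough to automate directly. Here is a minimal Python sketch using the contrast-ratio formula published in the WCAG guidelines; the color values in the example are arbitrary.

```python
# Minimal WCAG 2.x contrast checker (a sketch; thresholds are the
# published AA requirements: 4.5:1 for normal text, 3:1 for large text).

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color, per the WCAG definition."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg, bg, large_text: bool = False) -> bool:
    """True if the color pair meets WCAG AA contrast for its text size."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

# Example: mid-gray text (#777777) on white narrowly fails AA.
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # ~4.48
print(passes_aa((119, 119, 119), (255, 255, 255)))                 # False
```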
While these advancements make it easier for companies to comply with the guidelines, human oversight remains essential. For example, some of Google's AI-generated search result summaries have contained inaccurate information, which, when disseminated on websites, can misinform and harm users with disabilities. In 2023, researchers at Pennsylvania State University found that some AI models used to categorize large amounts of text show bias against disability-related language. These models tend to classify sentences as negative or "toxic" based on the presence of disability-related terms without regard for the context.
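Audits for this kind of bias often use minimal pairs: sentences that are identical except for a disability mention, scored by the same classifier. The Python sketch below illustrates the idea; the sentence pairs are made up, and `score_toxicity` stands in for whatever classifier is being audited.

```python
# Minimal-pair bias probe (a sketch). `score_toxicity` stands in for any
# model that maps text to a toxicity probability; the pairs are
# hypothetical examples differing only in a disability mention.

from statistics import mean
from typing import Callable

PAIRS = [
    ("My neighbor is a great cook.",
     "My neighbor, who is blind, is a great cook."),
    ("She leads our sales team.",
     "She uses a wheelchair and leads our sales team."),
]

def disability_score_gap(score_toxicity: Callable[[str], float]) -> float:
    """Mean increase in toxicity score attributable only to the
    disability mention. A well-calibrated model should score near 0."""
    gaps = [score_toxicity(with_d) - score_toxicity(without_d)
            for without_d, with_d in PAIRS]
    return mean(gaps)

if __name__ == "__main__":
    # Toy stand-in scorer, deliberately biased against one term,
    # so the probe has something to detect.
    toy = lambda text: 0.9 if "wheelchair" in text else 0.1
    print(disability_score_gap(toy))  # 0.4 -> bias detected
```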
To address these problems, experts emphasize the importance of involving the user community—including those with disabilities—in all stages of AI development.
"AI data systems that include representation of people with disabilities to minimize bias," the United Access Board, a governmental agency, advised during its 2024 Preliminary Findings on Artificial Intelligence. This should include a thorough evaluation of AI tools in the hiring process and for job-related activities "to identify potential discriminatory impacts on applicants and employees with disabilities."
The board also noted concerns about AI-powered surveillance tools known as "bossware technologies," which may not be correctly calibrated for employees with disabilities. This can be a problem if companies attempt to monitor things like employee fatigue or movement based on wearable technology that may not properly assess people with physical disabilities.
Realizing AI's potential hinges on acknowledging its limitations
Thousands of website owners have taken significant strides to meet accessibility standards since the 2010s, as courts and regulators increasingly treated company websites as covered by the Americans with Disabilities Act. Yet as of 2023, the vast majority of home pages among the top million websites still had detectable accessibility failures, according to WebAIM.
As with any new technological breakthrough, the initial excitement for AI-driven tools to tackle these persistent compliance issues, and the overpromising that accompanied it, has given way to closer examination of their true potential and limitations. Many industry experts agree that AI can offer scalable and relatively affordable ways to meet compliance standards, but relying solely on AI-powered solutions will not produce the outcome legislators and social advocates are striving for: fully inclusive online experiences for people with disabilities. AI tools have helped make workplaces and the internet more accessible, and they have also shown business owners that human involvement remains essential. As more businesses pair AI with responsible oversight and inclusive design, far more workplaces and online experiences could become accessible to all.
Story editing by Carren Jao. Additional editing by Elisa Huang. Copy editing by Sofía Jarrín. Photo selection by Lacy Kerrick.
This story was produced and distributed in partnership with Stacker Studio.
5 ways companies are incorporating AI ethics
As more companies adopt generative artificial intelligence models, AI ethics is becoming increasingly important. Ethical guidelines to ensure the transparent, fair, and safe use of AI are evolving across industries, albeit slowly when compared to the fast-moving technology.
But thorny questions about equity and ethics may force companies to tap the brakes on development if they want to maintain consumer trust and buy-in.
A KPMG survey found that about half of consumers think there is not sufficient regulation of generative AI right now. The lack of oversight tracks with limited trust that institutions, particularly tech companies and the federal government, will ethically develop and implement AI, according to KPMG.
Within the tech industry, ethical initiatives have been set back by a wave of layoffs that gutted AI ethics teams, according to an article presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency. Layoffs at major corporations, including Amazon's streaming platform Twitch, Microsoft, Google, and X, hit those teams hard, leaving a vacuum.
While nearly 3 in 4 consumers say they trust organizations using GenAI in daily operations, confidence in AI varies between industries and functions. Just over half of consumers trust AI to deliver educational resources and personalized recommendations, compared to less than a third who trust it for investment advice and self-driving cars. Consumers are open to AI-driven restaurant recommendations, but not, it seems, with their money or their lives.
Clear concerns persist around the broader use of a technology that has elevated scams and deepfakes to a new level. The KPMG survey found that the biggest consumer concerns are the spread of misinformation, fake news, and biased content, as well as the proliferation of more sophisticated phishing scams and cybersecurity breaches. As AI grows more sophisticated, these concerns are likely to be amplified as more people are potentially affected, making ethical frameworks for approaching AI all the more essential.
That puts the onus to set ethical guardrails on companies and lawmakers. In May 2024, Colorado became the first state to enact comprehensive AI legislation, with provisions for consumer protection and accountability for the companies and developers behind AI systems used in education, financial services, and other critical, high-risk industries.
As other states evaluate similar legislation for consumer and employee protections, companies especially possess the in-the-weeds insight to address high-risk situations specific to their businesses. While consumers have set a high bar for companies' responsible use of AI, the KPMG report also found that organizations can take concrete steps to garner and maintain public trust—education, clear communication and human oversight to catch errors, biases, or ethical concerns.
The reality is that the tension between proceeding cautiously to address ethical concerns and moving full speed ahead to capitalize on the competitive advantages of AI will continue to play out in the coming years.
What follows are five ways companies are ethically incorporating artificial intelligence in the workplace, identified through an analysis of current events.

Actively supporting a culture of ethical decision-making
AI initiatives within the financial services industry can speed up innovation, but companies need to take care to protect the financial system and customer information from criminals. To that end, JPMorgan Chase has built out a responsible AI program, including an ethics team that works on the company's AI initiatives. The company ranks at the top of the Evident AI Index, which assesses banks' AI readiness, including a top ranking for transparency in the responsible use of AI.
Development of risk assessment frameworks
The National Institute of Standards and Technology has developed an AI Risk Management Framework that helps companies better plan and grow their AI initiatives. The approach supports companies in identifying the risks posed by AI, defining and measuring ethical activity, and implementing AI systems with fairness, reliability, and transparency. The Vatican is even getting in on the action: it collaborated with the Markkula Center for Applied Ethics at Santa Clara University, a Catholic college in Silicon Valley, to produce a handbook to help companies navigate AI technologies ethically.
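To make the risk-identification step concrete, here is a toy Python risk register organized around the four functions the NIST framework names (Govern, Map, Measure, Manage). The fields, scoring scale, and example entry are hypothetical illustrations, not part of the framework itself.

```python
# Toy risk register keyed to the NIST AI RMF's four functions.
# Everything below the enum is an invented illustration.

from dataclasses import dataclass
from enum import Enum

class RMFFunction(Enum):
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class AIRisk:
    description: str
    function: RMFFunction
    severity: int    # 1 (low) to 5 (critical)
    likelihood: int  # 1 (rare) to 5 (frequent)
    mitigation: str = ""

    @property
    def priority(self) -> int:
        return self.severity * self.likelihood

register = [
    AIRisk("Chatbot gives incorrect benefits info to employees",
           RMFFunction.MEASURE, severity=4, likelihood=3,
           mitigation="Sample and human-review 5% of responses weekly"),
]
for risk in sorted(register, key=lambda r: r.priority, reverse=True):
    print(risk.priority, risk.description)
```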
Specialized training in responsible AI usage
Amazon Web Services has developed many tools and guides to help its employees think and act ethically as they develop AI applications. One of them, a YouTube series produced by AWS Machine Learning University, serves as an introductory course that covers fairness criteria and methods for mitigating bias. The company's SageMaker Clarify tool helps developers detect bias in AI model predictions.
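For a sense of what such bias detection computes, here is a standalone Python sketch of the disparate impact ratio, one widely used fairness check in the same family of metrics that tools like SageMaker Clarify report. The data is made up, and the 0.8 cutoff is the familiar "four-fifths" rule of thumb, not a legal standard.

```python
# Disparate impact ratio: selection rate for a protected group divided
# by the rate for a reference group. All data here is hypothetical.

def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (1 = selected, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group: list[int], reference: list[int]) -> float:
    return selection_rate(group) / selection_rate(reference)

# Invented model decisions for two applicant groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # reference group: 62.5% selected
group_b = [0, 0, 1, 0, 1, 0, 0, 0]  # protected group: 25% selected

ratio = disparate_impact(group_b, group_a)
print(round(ratio, 2))  # 0.4, well below the common 0.8 rule of thumb
```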
Communication of AI mission and values
Companies that develop a mission statement around their AI practices clearly communicate their values and priorities to employees, customers, and other stakeholders. Examples include the published AI principles of Dell Technologies and IBM, which clarify each company's approach to AI application development and implementation, publicly setting guiding principles such as "respecting cultural norms, furthering social equality, and ensuring environmental sustainability."
Implementing an AI ethics board
Companies can create AI ethics boards to help them find and fix the ethical risks around AI tools, particularly systems that produce biased output because they were trained on biased or discriminatory data. SAP has had an AI Ethics Advisory Panel since 2018; it works on current ethical issues and looks ahead to identify potential future problems and solutions. Northeastern University has set up a standing advisory board to work with companies that prefer not to create their own.
Story editing by Jeff Inglis. Additional editing by Alizah Salario. Copy editing by Paris Close. Photo selection by Clarese Moller.

