In an AI-perfect world, it’s time to prove you’re human – Computerworld (https://www.computerworld.com), Fri, 09 Jan 2026 07:00:00 +0000. Copyright (c) 2026 FoundryCo, Inc.

Take everything you ever learned and practiced about business communication and throw it out the window. Because of the AI revolution, the world of “content” is now upside-down and inside-out. 

Until recently, you likely strove for perfection and polish in your slide presentations, emails, Slack messages, marketing images, social posts, blog posts, LinkedIn profile, video calls, resumes and cover letters. Doing so signaled value in the form of competence, experience, and ability. 

But now, communicating with perfection and polish signals a lack of value. It signals that you used AI. 

Speaking to Instagram influencers last week, Instagram chief Adam Mosseri announced the dawn of this new world. In posts on Instagram and Threads, he said: “Deepfakes are getting better and better. AI is generating photographs and videos indistinguishable from captured media. The feeds are starting to fill up with synthetic everything.”

Here’s his main point: “AI makes polish cheap.” It’s “cheap to produce and boring to consume.”

“People want content that feels real,” he wrote. “In a world where everything can be perfected, imperfection becomes a signal. Rawness isn’t just aesthetic preference anymore. It’s proof” that you’re offering authenticity, reality, value.

Mosseri was talking to online creators. But his insights go double for business professionals. 

Corporate communication is now being flooded with cheap, polished words and images. And if you communicate without AI, but in a polished way, others will assume it’s ChatGPT talking, not you. 

The people who, in Mosseri’s words, “can maintain trust and signal authenticity — by being real, transparent, and consistent — will stand out.”

A related trend: the more communicators offload thinking and creation to AI, the more homogenized, generic, and average everything becomes. 

Being yourself in all communication is not only about authenticity, but individuality. By communicating in a way that only you can communicate, you increase your appeal and value in a world of generic, faceless, zero-personality AI content.

For marketing communications, this goes double. The public will increasingly assume what they see is AI-generated, and therefore cheap garbage. 

McDonald’s faced backlash last month after releasing an AI-generated Christmas commercial titled “The Worst Time of the Year.” Critics were not lovin’ it and slammed the video as “disjointed,” “uncanny,” and “stupefying.” The public backlash forced McDonald’s to pull the ad, damaging the company’s reputation.

In both November 2024 and November 2025, Coca-Cola used AI to recreate its old “Holidays Are Coming” commercials. The ads were slammed for being “soulless,” “dystopian,” and “devoid of any actual creativity.” Car blog Jalopnik pointed out that the truck in the ad had 10 different axle configurations during the 60-second commercial.

I believe that half the reaction was to the actual content quality, and half was simply the knowledge that these huge companies used AI, which is assumed to be cheap, fast, and easy. The move signaled they were just “phoning it in,” not offering any kind of creative thinking. 

Not only will the public reject what they assume to be AI, but the social algorithms will increasingly reward and boost content offering the signals of authenticity. In fact, Mosseri said that within Meta there is a push to prioritize “original content” over “templated” or “generic” AI content that is easy to churn out at a massive scale. 

But by all means, use AI

ChatGPT went live on Nov. 30, 2022. In the ensuing three years, the public learned that LLM-based generative AI could magically do stuff for us so we didn’t have to do it. But now that everybody is magically doing stuff, stuff isn’t magic anymore.

This is the year we completely change our relationship to AI. 

Microsoft CEO Satya Nadella has some useful ideas about this, which he included in a blog post titled, “Looking Ahead to 2026.”

Rather than thinking of AI as a tool that replaces work and workers, we should think of it as a “scaffolding for human potential,” a way to magnify our cognitive capabilities, not replace them. 

In other words, instead of viewing AI as something that writes and creates pictures so we don’t have to or writes code so we don’t have to — meaning we don’t even have to learn how to code — we need to use AI to become great at writing, creating images and coding.

From now on, everyone will assume everyone else has and uses AI. Content and communications will always exist on a spectrum from fully AI-generated to zero-AI human communication. The further toward the human any bit of content gets, the more valuable it will feel to both the receivers of the content and to the gatekeepers. 

There’s no fooling anyone anymore. That’s why it’s powerful to transparently disclose how you used AI. This kind of disclosure engenders trust and credibility. 

I’ll give you an example. For this column, I used three AI-based tools. One is a tool called MyMind, in which I have been taking notes about various things on this topic I’ve read over the past month or two. It uses AI to auto-index and tag content, so you can find it quickly.

I used Gemini 3 Pro via Kagi Assistant (disclosure: my son works at Kagi) both as a kind of search engine and to find the details about where and when tech leaders expressed the ideas I captured in my MyMind notes.

And my word processor of choice is Lex, which has AI tools. After writing the column, I asked for advice on how I could improve it. I found a small number of its suggestions helpful and made some tweaks based on them. And I use it to spell-check and look for typos — that sort of thing. 

The truth is that AI has changed everything, including and especially our knowledge and expectations. 

We now live in a world where AI-generated content is cheap, easy, generic, boring and signals low value. 

Meanwhile, there’s only one you. It sounds mushy and cliché, but it’s increasingly true: the more you can show up as your authentic self, the more valuable you will be — and appear to be — to others. 

Be yourself. Express yourself. And forget about perfection and polish. 

https://www.computerworld.com/article/4114605/in-an-ai-perfect-world-its-time-to-prove-youre-human.html (Artificial Intelligence, Careers, Generative AI, IT Skills and Training, Technology Industry)
Global AI adoption is growing, and so is the digital divide Fri, 09 Jan 2026 04:31:56 +0000

Global adoption of AI in the second half of 2025 rose by 1.2 percentage points compared to the first half of the year, a report released Thursday by the Microsoft AI Economy Institute (AIEI) indicates.

According to the findings from the Microsoft think tank, whose principal mandate is to shape what it calls an inclusive, trustworthy AI economy, even though one person in six now uses generative AI (genAI) tools, there exists what it described as a “widening divide.”

The adoption rate in the Global North (a term for developed nations regardless of geographic location) is 24.7% of the working-age population, far higher than the 14.1% figure in the Global South (developing and least-developed countries).

Other key findings revealed that:

  • Nations that invested early in digital infrastructure, AI skilling, and government adoption, such as the United Arab Emirates, Singapore, Norway, Ireland, France, and Spain, continue to lead.
  • The top 10 nations with the largest increases in AI adoption share are all high-income economies.
  • While the US is the leader in both AI infrastructure and frontier model development, it fell from 23rd to 24th place in AI usage by its working-age population, with a 28.3% usage rate. It lags far behind smaller, more highly digitized and AI-focused economies such as Ireland (44%), New Zealand (40.5%), Belgium (36%), and Canada (35%). South Korea (30.7%) led the world in growth, with usage surging by almost 5% in the second half of the year.
  • A parallel development that reshaped the global landscape was the rapid rise of DeepSeek. Its success, the AIEI contends, reflects growing Chinese momentum across Africa, which is a trend that could continue in 2026.

To track the overall global increase of 1.2 percentage points, the executive summary stated, “we measure AI diffusion as the share of people worldwide who have used a genAI product during the reporting period. This measure is derived from aggregated and anonymized Microsoft telemetry and then adjusted to reflect differences in OS and device market share, internet penetration, and country population.”
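As a rough illustration of that kind of adjustment (all numbers and the blending formula here are hypothetical sketches, not the AIEI's actual model), a population-wide diffusion share might be estimated like this:

```python
# Hypothetical sketch: scale a usage share observed on one platform's
# telemetry up to a population-wide diffusion share, in the spirit of
# the methodology quoted above. Every number below is invented.

def adjusted_diffusion(desktop_usage_share: float,
                       desktop_os_share: float,
                       mobile_usage_share: float,
                       internet_penetration: float) -> float:
    """Estimate the share of a country's whole population using genAI.

    desktop_usage_share: fraction of observed desktop users who used
        a genAI product during the reporting period.
    desktop_os_share: fraction of internet users on the observed OS.
    mobile_usage_share: assumed usage fraction on other platforms
        (the report flags this cross-platform assumption as a limitation).
    internet_penetration: fraction of the country's population online.
    """
    # Blend per-platform usage weighted by platform market share...
    online_share = (desktop_usage_share * desktop_os_share
                    + mobile_usage_share * (1 - desktop_os_share))
    # ...then scale down by how many people are online at all.
    return online_share * internet_penetration

# Hypothetical country: 30% of desktop users and 25% of mobile users
# touched a genAI tool; desktop is 40% of internet use; 80% are online.
print(round(adjusted_diffusion(0.30, 0.40, 0.25, 0.80), 3))  # 0.216
```

The point of the sketch is only that the headline percentages are modeled estimates, several assumptions removed from the raw telemetry.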

Methodology limitations

Brian Jackson, principal research director at Info-Tech Research Group, said, “one thing worth noting is that when they say Microsoft telemetry, what they mean is they have data on what some Windows users are doing (those who agree to share their data with Microsoft), and then they are making some adjustments to try to account for AI use on mobile platforms. So, if a user with an Android phone or iOS device is using ChatGPT, Microsoft isn’t capturing that, but at least they try to acknowledge that through some sort of methodology.”

Jackson pointed out that the researchers address these shortcomings in their methodology in a separate document that is part of  the report’s references list, saying, “Nevertheless, our methodology carries some limitations. Because our metric originates with Microsoft telemetry, it is inherently biased toward desktop platforms and the Microsoft user demographic.”

They go on to say, “although we apply rigorous adjustments and scaling factors, our results implicitly assume that user behavior in Microsoft products approximates that in other platforms, which may not always hold true. Future iterations of this research could mitigate this limitation by integrating data from mobile app analytics providers such as Sensor Tower or leveraging web traffic analytics from tools like Semrush or SimilarWeb.”  

However, Jackson noted, “aside from that methodology quirk, the findings are generally positive for genAI firms because the usage is up across the board. That means there’s still a growing appetite to at least try these tools. The researchers mostly want to draw attention to how AI use will contribute to the growing digital divide. They conclude that internet access is a big limiter of AI adoption in developing countries, but there is a big demand for it.”

Sanchit Vir Gogia, chief analyst at Greyhound Research, added, “the biggest mistake in reading these adoption numbers is assuming generative AI is a single behavior that everyone moves through in the same way. It isn’t. What we’re seeing instead is a split.”

Usage becoming more intentional

Some people, he said, “try AI out of curiosity and move on. Others keep it for a small number of tasks where it clearly helps, like drafting, analysis, coding, or summarizing. And in many organizations, AI is no longer something people consciously ‘use’ at all. It’s being built into systems, platforms, and workflows, quietly doing work in the background. When you collapse all of that into one metric, the data naturally looks confusing.”

Gogia said that, yes, some early experimentation does drop off, and that’s completely normal. “But that doesn’t mean people are abandoning AI,” he said. “What’s really happening is that usage becomes more intentional. Once someone restructures how they work around AI assistance, that habit tends to stick.”

Individuals, he explained, “may open fewer tools and spend less time prompting, but the work itself has changed [in that] genAI doesn’t behave like a consumer social app that needs constant engagement. It behaves more like infrastructure. Its value comes from replacing steps, not grabbing attention. When a step disappears, so does visible usage, even though dependence increases.”

This trend, said Gogia, “[also] helps explain why many developed economies look surprisingly weak on first-use measures. These markets aren’t falling behind. They’re actually further along in absorption. In digitally mature environments, AI increasingly arrives as an upgrade or a default feature, not as a shiny new tool you actively opt into.”

People inherit the capability, he said, rather than consciously adopting it, so they under-report its usage. But at the same time, he noted, “governance moves slowly. Legal review, procurement, and risk assessments delay official rollout, but behavior doesn’t wait. Employees experiment quietly, teams prototype locally, and real adoption builds long before institutions catch up.”

Inside enterprises, “the clearest signal that AI is sticking is what happens when it’s taken away,” Gogia observed. “In organizations that have pulled back AI after pilot phases, teams consistently report slower work, more friction, and real frustration. That reaction matters. It tells you AI has crossed from experimentation into reliance.”

Budgets tell the same story. “GenAI is no longer fighting for innovation funding,” he said. “It’s being folded into operating spend, security planning, and governance models. Those are the kinds of conversations organizations only have when a capability is becoming unavoidable.”

https://www.computerworld.com/article/4114739/global-ai-adoption-is-growing-and-so-is-the-digital-divide.html (Artificial Intelligence, Generative AI)
Enterprises still aren’t getting IAM right Fri, 09 Jan 2026 04:18:19 +0000

Despite all the warnings, and constant news of devastating cyberattacks, enterprise users are still cutting corners when it comes to identity and access management (IAM).

Nearly two-thirds (63%) of cybersecurity leaders admit their employees continue to bypass security controls so they can work faster, according to new research by security company CyberArk. Furthermore, enterprises are struggling to establish access policies for emerging AI agents and other agentic tools.

The findings point strongly to identity and privilege control as central to operational risk.

“The data points to a cultural pattern where immediate productivity wins often outweigh long‑term security posture,” said  Charles Chu, GM of IT and developer solutions at CyberArk. “It is clear that security is still perceived as something that slows people down.”

Privileged access management inadequate

CyberArk surveyed 500 leaders involved in privileged access management (PAM) in identity and infrastructure roles, including DevOps engineers, security managers, cloud security architects, database managers, site reliability and software engineers, and IT support specialists.

They report that in their organizations:

  • Just 1% have fully implemented a modern just-in-time (JIT) privileged access model;
  • 91% say at least half of their privileged access is always-on (standard privilege), providing unrestricted, persistent access to sensitive systems;
  • 45% apply the same privileged access controls to human and AI identities;
  • 33% lack clear AI access policies.

The research also revealed a growing issue with “shadow privilege,” accounts and secrets that are unmanaged, unnecessary, and unknown to cybersecurity leaders. CyberArk found that 54% of organizations uncover these types of accounts and secrets every week.

This suggests that access ownership is “diffuse,” Chu noted. “If no one feels responsible for continuously pruning and governing privileged access, it naturally accumulates.” Added to that is the fact that the majority of organizations (88%) manage multiple identity tools, which “creates confusion about who has authority and which system is the source of truth.”

The riskiest human behaviors

CyberArk identified several of the riskiest human behaviors in access management, including:

  • Copying credentials into personal password managers, chat apps, or email, because the “official” process is slower.
  • Spinning up cloud resources or test environments with privileged access outside central controls.
  • Using shared admin accounts or recycling similar passwords/tokens across systems and environments.
  • Leaving always-on access in place “just in case,” even when those elevated privileges are only required occasionally.

“Employees bypass controls for very human reasons,” Chu acknowledged. “They’re under pressure to move fast, and the security tools that they are required to use are often not user-friendly and conflict with how they actually get work done.”

This leads to ad‑hoc local admin creation, and long‑lived IAM roles and API keys that “no one revisits.”

AI is only exacerbating the problems. Users paste keys, logs, or configuration files into AI tools, unintentionally exposing secrets, Chu noted. AI can also deploy apps and alter systems faster than existing controls can keep up, so engineers tend to work around the controls. Further, AI systems and agents are increasingly acting on behalf of users in ways not yet fully visible to security teams. This makes risky shortcuts even more difficult to detect.

“The net effect is that the gap between what the policy says and what actually happens in production is widening,” said Chu.

Give AI agents unique identities

The bottom line: AI agents operate quite differently than human users. As well as being speedier, they work continuously and touch multiple systems and data sets in a single workflow. They present a unique risk because they can very quickly execute large numbers of privileged actions.

With this in mind, security teams should treat AI agents as distinct identities with their own access controls, Chu advised. Every individual agent should be assigned a dedicated identity and credentials, with tightly-scoped permissions for specific systems and data sets. Short-lived tokens should take the place of long-lived keys, and elevated rights should only be granted just in time, and for specific tasks. Further, all actions taken by AI agents should be logged and attributable.
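A minimal sketch of that advice, with every name and permission string invented for illustration (not any real PAM product's API): each agent gets a dedicated identity, a tightly scoped permission set, and an attributable audit trail.

```python
# Hypothetical sketch: treat each AI agent as a distinct identity with
# its own credential and tightly scoped permissions, and log every
# action so it is attributable. All identifiers here are invented.
import secrets
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    # Permissions scoped to specific systems and datasets, not "admin".
    allowed: set[str] = field(default_factory=set)
    # A dedicated per-agent credential, never shared between agents.
    credential: str = field(default_factory=lambda: secrets.token_urlsafe(16))

audit_log: list[tuple[str, str, bool]] = []

def perform(agent: AgentIdentity, action: str) -> bool:
    """Allow the action only if it is in the agent's scope; always log it."""
    ok = action in agent.allowed
    audit_log.append((agent.agent_id, action, ok))  # attributable trail
    return ok

invoice_bot = AgentIdentity("invoice-bot-01", allowed={"read:invoices"})
perform(invoice_bot, "read:invoices")   # permitted, and logged
perform(invoice_bot, "write:payroll")   # denied, and still logged
```

The design choice worth noting is that denials are logged too: the audit trail is how security teams make agent activity visible at all.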

Just as with humans, reduced standing access, better visibility, and strong governance must be “applied explicitly and consistently” to AI, Chu noted.

JIT is hard to implement

JIT is a technique that grants select permissions only when required, for a specific purpose, and for a limited period of time. When users or systems request access, they receive a “time-bound and scope-limited” set of privileges, allowing them to perform the required task, then automatically “return to a lower baseline,” Chu explained.

“Every step is logged so that organizations can see who or what has powerful access and why,” he said.
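As a rough sketch of that lifecycle (a hypothetical API, not any particular PAM product), a JIT grant is time-bound and scope-limited, expires back to baseline on its own, and leaves a log entry:

```python
# Hypothetical sketch of a just-in-time (JIT) grant: time-bound,
# scope-limited privileges that expire back to a lower baseline,
# with the grant logged. Names and scopes are invented.
import time

audit = []

def grant_jit(user: str, scope: str, ttl_seconds: float) -> dict:
    """Issue a time-bound, scope-limited privilege and log who got what."""
    grant = {"user": user, "scope": scope,
             "expires_at": time.monotonic() + ttl_seconds}
    audit.append(("grant", user, scope))
    return grant

def is_active(grant: dict) -> bool:
    """After expiry the holder automatically returns to baseline access."""
    return time.monotonic() < grant["expires_at"]

g = grant_jit("alice", "db:prod:read", ttl_seconds=0.05)
assert is_active(g)       # within the window: elevated access
time.sleep(0.06)
assert not is_active(g)   # window closed: back to baseline, no revocation step
```

The contrast with the survey's "always-on" finding is the `expires_at` field: nothing has to remember to revoke access, because elevation lapses by default.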

But JIT remains difficult to realize in practice, Chu noted, resulting in a heavy reliance on standing privileges, even as enterprises are fully aware of how risky that practice is.

A number of factors are to blame, he said: IT teams can be hesitant to make changes to legacy systems for fear of disruption, and complex IT environments comprising on-premises infrastructure, multiple clouds, and SaaS applications can complicate implementation. Some teams also worry that JIT can slow down incident response or other routine practices.

Adding to the challenges, existing cybersecurity tools haven’t been designed for highly complex enterprise environments, Chu said. “That combination points to fragmentation: There is plenty of tooling, but not enough unified visibility and control.”

How enterprises can protect themselves

Today’s enterprises need security that is built around centralized identity, least privilege, and automation, Chu emphasized. This means strong single sign‑on (SSO) with multi‑factor authentication (MFA) and contextual policies; modern secret management for passwords, keys, and tokens for both humans and machines; privileged access capabilities that can issue short‑lived access on demand with full logging; and analytics that stitch together activity across human accounts, service accounts, and AI agents.

From a cultural perspective, organizations should establish clearer ownership of identity and privilege management, shared goals, and top-down messaging around cybersecurity practices, he said.

Also, critically, organizations must adopt tools that easily integrate into existing processes and workflows, thus reducing friction and reducing user workarounds. “The key to effective implementation is to make security as invisible as possible to the user as they do their daily work,” Chu asserted.

https://www.computerworld.com/article/4114749/enterprises-still-arent-getting-iam-right.html (Artificial Intelligence, Identity and Access Management, Security)
Microsoft scraps planned email limits for Exchange Online Thu, 08 Jan 2026 19:47:38 +0000

Microsoft has backed away from its plan to introduce a limit of 2,000 emails per day in Exchange Online. The change, announced in 2024 and set to go into effect last year, was aimed at reducing the amount of online spam. But the limitation was met with fierce criticism from business users, and it is now clear that Microsoft has shelved the idea.

“Customers have shared that this limitation creates significant operational challenges. Your feedback is important, and we are committed to solutions that balance security and usability without causing unnecessary disruption,” Microsoft officials wrote on the Exchange Team blog.

Microsoft will now look for alternative solutions to combat spam and abuse of Exchange Online, according to Bleeping Computer.

https://www.computerworld.com/article/4114627/microsoft-scraps-criticized-change-in-exchange-online.html (Collaboration Software, Email Clients, Microsoft, Microsoft Exchange, Productivity Software, Security)
CES 2026: AI compute sees a shift from training to inference Thu, 08 Jan 2026 18:27:43 +0000

LAS VEGAS — Not so long ago — last year, let’s say — tech industry spending was all about the big AI companies spending billions of dollars on training ever-larger frontier AI models.

That’s rapidly changing, which is why at this year’s CES, AI inference was at the heart of the show’s keynote speeches and major announcements. (Inference is the phase in which an already-trained AI model is put to work on new, previously unseen data, as opposed to the training phase that builds the model.)
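The distinction can be sketched in a few lines of code. The tiny linear model below is purely illustrative, standing in for the large models the article discusses: training fits parameters once, up front; inference reuses them over and over on new inputs, which is where the ongoing compute demand comes from.

```python
# Illustration of the training/inference split described above,
# using a toy least-squares line fit in place of an LLM.

def train(xs, ys):
    """Fit y = a*x + b by ordinary least squares (the 'training' phase)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return a, mean_y - a * mean_x

def infer(model, x):
    """Apply the already-fitted parameters to unseen data (inference)."""
    a, b = model
    return a * x + b

model = train([1, 2, 3, 4], [2, 4, 6, 8])  # compute-heavy, done once
print(infer(model, 10))                     # cheap per call, runs per request
```

The economics Yang describes follow from that shape: training cost is a one-time capital outlay per model, while inference cost scales with every user request served.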

Until recently, according to Lenovo CEO Yuanqing Yang, most AI spending was related to training. Approximately 80% went to creating the large language models (LLMs) that underpin generative AI, he said, with the remaining 20% to the inference side.

That is starting to change. “In the future, those numbers are reversed,” he told reporters Wednesday at CES. “Eighty percent will be on the inference and 20% will be on training. That is our forecast.”

And that’s why Lenovo launched three new inference servers on Tuesday, he said. “We definitely want to lead the trend.”

According to industry experts, that shift is already under way. In a November report, Deloitte estimated that inference workloads accounted for half of all AI compute in 2025 — a figure that will jump to two-thirds in 2026. 

The actual infrastructure spending lags a little bit, Lenovo’s Ashley Gorakhpurwalla, executive vice president and president of infrastructure solutions group, told Computerworld. “When you train foundational models, you start big, and you put all the capital in up front,” he said.

But when enterprises deploy AI, such as a chatbot, they start small and slowly scale up. “People deploy, iterate, and move forward,” Gorakhpurwalla said. “When you first deploy a chatbot, it’s a small expense.”

However, even on the spending side, 2026 will be a big inflection year for inference, according to a December report by the Futurum Group. “We’re seeing a clear shift,” Futurum analyst Nick Patience said in the report. “Inference workloads are set to overtake training revenue by 2026.”

Enterprises are moving from experimentation to deployment, boosting the demand for AI inference servers, and are also increasing hybrid and edge deployments.

That’s the rationale for Lenovo’s decision to launch three new inferencing servers at CES this week.

The servers include the Lenovo ThinkSystem SR675i, designed to run full-sized LLMs for applications in areas like manufacturing, healthcare and financial services; the Lenovo ThinkSystem SR650i, which is designed to be scalable and easy to deploy in existing data centers; and the Lenovo ThinkEdge SE455i, a compact server built for retail, telco and industrial environments.

This isn’t Lenovo’s first foray into inferencing services, or even into small-scale inferencing servers. It released its first entry-level AI inferencing server for edge AI in March 2025.

The company also offered other servers capable of handling AI workloads that weren’t specifically marketed as inferencing servers.

There are three main drivers for enterprises looking to buy and deploy their own inference servers, said Arthur Hu, senior vice president, global CIO and chief delivery and technology officer for Lenovo’s solutions and services group. That role puts him in direct contact with enterprise customers.

First, customers are getting more strategic about how they use cloud computing, he said in an interview at CES. The public cloud is good for early experimentation or when a company needs to deploy over a large geography.

“But if you know what your workload size and predictability is, you don’t need to pay the additional premium,” he said.

Another adoption driver is the need to use data where it’s generated. “You don’t want to store all the data all the time,” Hu said. With an edge AI server, the data can be immediately used when needed, then discarded later.

Finally, there are the privacy, security and sovereignty concerns. “Everyone is very sensitive that they can control their data and govern it,” he said.

When a company runs its own AI inferencing, the data never needs to be out of its corporate hands. 

Lenovo wasn’t the only company making bets on AI inferencing this week. AMD announced the AMD Instinct MI440X GPU, designed for on-premises inferencing for enterprise AI. And while Lenovo rivals Dell and HPE didn’t announce similar hardware at CES, they did both release new inferencing servers last year.

Dell’s air-cooled PowerEdge XE9780 and XE9785 servers integrate into existing enterprise data centers, while the liquid-cooled Dell PowerEdge XE9780L and XE9785L servers support rack-scale deployment, the company announced last May.

For its part, HPE last March released its latest AI servers, including the HPE ProLiant Compute DL380a Gen12. It offers support for up to 16 GPUs as well as direct liquid cooling, and is purpose built for AI fine-tuning and inference. 

Given the rapidly increasing demand for enterprise inferencing, more announcements from the major players are likely this year.

Editor’s note: Lenovo paid for Maria Korolov’s transportation and hotel costs for this year’s CES, but had no editorial role in the creation of this story.

https://www.computerworld.com/article/4114579/ces-2026-ai-compute-sees-a-shift-from-training-to-inference.html (Artificial Intelligence, CES, Data Center, Events, Generative AI, Industry, Servers)
JP Morgan Chase wins the hunt for the Apple Card Thu, 08 Jan 2026 14:44:11 +0000

Following many months of speculation, Apple has accepted JP Morgan Chase as the new issuer of Apple Card, replacing Goldman Sachs. 

Apple has been on the hunt for a replacement card issuer ever since Goldman Sachs decided to abandon its push into retail banking. American Express, Synchrony Financial, and Capital One all expressed interest in the deal, but Chase took the prize. 

Chase will become the new card issuer with the transition to the new arrangement expected to take around 24 months. Apple Card will continue to work as it already does in the meantime. Some things don’t change. While Chase replaces Goldman Sachs as card issuer, Mastercard will continue to act as the payment processor for the card.

‘Shared commitment’ for future of Apple Card

From a highly enthusiastic start, Goldman Sachs seems to have become a less enthusiastic partner since deciding to abandon the retail banking market. This has likely prevented Apple from extending the card into new markets and building additional services for its financial product.

Jennifer Bailey, Apple’s vice president of Apple Pay and Apple Wallet, could be construed as having high hopes for the future, based on her statement from Apple: “Chase shares our commitment to innovation and delivering products and services that enhance consumers’ lives. We look forward to working together to continue to provide a best-in-class experience and exceptional customer service with Apple Card.”

“We’re excited to innovate together in the future,” said Allison Beer, Chase’s chief executive officer of Card & Connected Commerce.

Perhaps the biggest hint of expansive plans came from Mastercard’s president of the Americas, Linda Kirkpatrick, who said: “Innovation on Apple Card has taken the consumer payments experience to the next level, and we look forward to delivering simple, secure, and seamless payments at global scale.”

More challenging than before

The latter statement suggests an intention to make the service available more widely, though it’s possible the opportunity has already passed. Challenger banks are much more entrenched outside the US, even as international tension dents America’s singular brand appeal.

If you think back to the well-storied launch of Apple Card, the world was quite different. At that time, the full impact of conflict and energy prices hadn’t struck and economies were in better shape. Apple hit that market with a cutting-edge financial product that attracted glowing praise from consumers and the retail banking market alike. JD Power’s vice president of banking and credit card services, Jim Miller, pointed out the opportunity that could be unlocked in emerging economies, where many were already far more accustomed to mobile banking services.

At the time of launch, Megan Caywood, Barclays’ global head of digital strategy, wrote, “Love seeing Apple nail the challenger bank playbook.” The card was recognized as the best US credit card by JD Power in 2021.

Unfortunately, early stumbles on the part of the new service, along with the decision to leave retail banking, put a brake on any plans that might have existed for a fast rollout of additional services, and the environment has changed.

That’s not necessarily a problem, given Apple’s vast brand appeal and the positive response its credit card service has already achieved. It is important to note that Chase is primarily a US consumer bank, which suggests the dream of an internationally available Apple Card could remain just that. 

What about the detail?

The deal ends years of speculation concerning the future of Apple’s $20 billion credit card portfolio. But getting there will take some time, with JP Morgan Chase putting aside $2.2 billion for potential credit losses during the move.

Goldman Sachs is also feeling some pain, taking a $1 billion loss on outstanding credit balances as part of the move. The truth seems to be that while Apple did attract lots of high-value customers — a 2020 survey claimed a third of Apple Card customers had annual incomes above $100,000 — it also attracted a hefty number of lower-income users and had user delinquency rates around 4%. (The industry average is 3.05%).

JP Morgan will also launch a new Apple savings account, though existing savers will be able to keep their current accounts if they like.

Apple says the service will remain as it is, for now. That means no fees, 3% daily cash back, and useful tools such as Apple Card Family, installment payments, and savings. More information for existing Apple Card users is available here. The transition to Chase is subject to closing conditions and regulatory approvals, Apple confirmed.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

https://www.computerworld.com/article/4114448/jp-morgan-chase-wins-the-hunt-for-the-apple-card.html
Musk’s OpenAI lawsuit clears path to trial, putting Microsoft in the spotlight Thu, 08 Jan 2026 12:46:53 +0000

A federal judge has signaled that Elon Musk’s lawsuit challenging OpenAI’s transformation from a nonprofit to a for-profit entity will proceed to trial, adding legal uncertainty for enterprise customers that have built AI strategies around the ChatGPT maker’s technology.

US District Judge Yvonne Gonzalez Rogers said during a Wednesday hearing in Oakland, California, that there was “plenty of evidence” for a jury to consider Musk’s allegations that OpenAI violated its founding mission, according to Reuters.

“This case is going to trial,” Judge Gonzalez Rogers said at the hearing, Reuters reported. The judge indicated she would issue a written order addressing OpenAI’s motion to dismiss the case, but stopped short of a formal ruling.

The lawsuit alleges OpenAI co-founders Sam Altman and Greg Brockman fraudulently induced Musk to help establish and fund the organization in 2015 under the premise that it would remain a nonprofit dedicated to developing AI for humanity’s benefit, only to later pursue for-profit restructuring through a partnership with Microsoft.

Microsoft’s role under scrutiny

Microsoft, which has invested more than $13 billion in OpenAI since 2019, is also named as a defendant. The judge said she needs to determine whether to dismiss unjust enrichment allegations against Microsoft, which has accumulated a $135 billion stake in OpenAI and holds licensing rights to its technology, the report added.

A Microsoft attorney argued at the hearing that there was no evidence the company “aided and abetted” OpenAI, according to the report.

The case raises questions about vendor governance stability for enterprises that have integrated OpenAI’s models into business-critical applications through Microsoft’s Azure cloud platform or direct partnerships with OpenAI.

Vendor stability concerns for AI customers

The case comes as enterprises accelerate AI deployment, with global enterprise technology spending reaching $4.9 trillion last year, driven by AI investments.

The legal proceedings could affect enterprise confidence in OpenAI’s governance stability as companies evaluate long-term AI vendor relationships. OpenAI’s technology powers Microsoft’s Copilot products, which enterprises have integrated across Office applications and Azure cloud services.

The trial schedule remains unclear. Judge Gonzalez Rogers said she needs to determine trial logistics but did not set a specific date, the report added.

Governance structure at the center of the dispute

Musk, who co-founded OpenAI in 2015 and contributed approximately $38 million — roughly 60% of its early funding — left the organization in 2018 following disagreements over its direction, Reuters reported. He filed the lawsuit in August 2024.

OpenAI was founded as a nonprofit research organization with a mission to ensure artificial general intelligence benefits all of humanity. In 2019, the company transitioned to a “capped profit” structure, creating a for-profit subsidiary while the nonprofit parent retained control.

OpenAI is now pursuing further restructuring to become a public benefit corporation, which would significantly reduce the nonprofit’s oversight role. The restructuring is critical to OpenAI’s ability to raise additional capital and compete in the expensive AI development race. The company has said the nonprofit arm would remain and be well-resourced through the transition.

The lawsuit contends OpenAI abandoned its founding charter through these structural changes. Judge Gonzalez Rogers cited evidence, including a 2017 diary entry by OpenAI co-founder Greg Brockman in which he wrote, “We’ve been thinking that maybe we should just flip to a for-profit,” according to court documents referenced by Reuters.

Parties respond to the court decision

In a statement following the hearing, OpenAI called the lawsuit baseless, the report added. “Mr Musk’s lawsuit continues to be baseless and a part of his ongoing pattern of harassment, and we look forward to demonstrating this at trial,” the company said.

OpenAI attorneys requested that Judge Gonzalez Rogers enter judgment against Musk, arguing he had not shown a sufficient factual basis for fraud and breach of contract allegations. The company also contended Musk failed to bring his claims in a timely manner.

OpenAI has also filed counterclaims alleging Musk’s actions, including an unsolicited $97 billion takeover bid earlier this year, were designed to disrupt its business operations to benefit his competing venture. xAI and OpenAI did not respond to a request from Computerworld for comment.

https://www.computerworld.com/article/4114422/musks-openai-lawsuit-clears-path-to-trial-putting-microsoft-in-the-spotlight.html
‘A wild future’: How economists are handling AI uncertainty in forecasts Thu, 08 Jan 2026 11:00:00 +0000

Economists have time-tested models for projecting economic growth. But they’ve seen nothing like AI, which is a wild card complicating traditional economic playbooks.

Some facts are clear: AI will make humans more productive and increase economic activity, with spillover effects on spending and employment.

But there are many unknowns about AI. Economists can’t isolate AI’s impact on human labor as automation kicks in, and pinning long-term factory job losses on AI is not possible.

AI also complicates capital expenditure projections. Heavy money is going into data centers and power plants, but how much this will translate into productivity gains — and thus whether demand for AI services will remain high — remains unclear.

Economists are weighing the likelihood of a slowdown in the US and global economy against the productivity gains AI is expected to bring. The Peterson Institute for International Economics, for instance, predicts that global gross domestic product (GDP) will slow in 2026, with AI offsetting some of the decline.

The Conference Board, a nonprofit economic think tank based in New York, estimates that US GDP will grow around 1.9% annually from 2025 to 2039, down from 2.4% growth from 2000 to 2024. AI will offset some of that decline, said Erik Lundh, senior global economist for The Conference Board’s Economy, Strategy & Finance Center.

To arrive at this projection, TCB factored AI’s uncertain crosscurrents — such as AI productivity gains — into its models along with established variables, such as long-term trends in total-factor productivity, labor, and capital.

But the projection “does not adequately capture the potential of a sea change… like artificial intelligence,” Lundh said.

Computerworld sat down with Lundh to understand AI’s big-picture impact, how it is being quantified, and how such metrics help business and policy planners. This interview has been condensed and lightly edited for clarity.

The Conference Board projections show US GDP growing at an average rate of 1.9% from 2025 to 2039, slower than the 2.4% growth from 2000 to 2024. Does AI meaningfully offset some of that slowdown? “Yes. The US GDP projection of 1.9% from 2025–2039 … reflects that there’s going to be less bang on the capital and labor side. Productivity associated with technological developments — including AI — does offset more of the slowdown.

“We’re seeing an increase in terms of productivity enhancements over the next decade and a half. While it doesn’t capture AI directly… there is all kinds of upside potential to the productivity numbers because of AI.

“The same is true of the global economy. Emerging markets are going to be growing faster than advanced economies are — and they have been  — but again, there is an expectation that AI will play a role in terms of augmenting the kinds of productivity that we see over the coming years.”

As AI becomes a bigger part of the economy, will it change the way we measure growth? And as we go forward, will AI’s impact on GDP keep increasing? “It helps to make a distinction in terms of AI’s contribution. On one hand, we’re seeing a lot of stories about data centers being built, electricity demand rising, and power plants being dusted off or newly planned to support AI. When you build a data center or a power plant, you create real economic activity — the planning, the materials, the labor that goes into erecting these things. That shows up as capital contribution to growth because it’s physical investment.

“But beyond that, you also get productivity enhancements afterward. It’s similar to infrastructure buildout. If you build a new port or airport, you spend money up front, but then it becomes cheaper to ship goods or move workers, and that long-term efficiency shows up on the productivity side.

“AI will likely have similar spillover effects once the infrastructure is in place. How large those effects will be is unclear, which is the core challenge… estimating the relationship between AI and productivity.”

How exactly could AI change productivity and investment patterns across the economy? “There are basically two ways this can go. You can get more output for the same input. If you used to put in 100 and get 120, maybe now you get 140. That’s an expansion in total factor productivity. Or you can get the same output with fewer inputs.

“It’s unclear how much of either will happen across industries or in the labor market. Will companies lean into AI, cut their workforce, and maintain revenue? Or will they keep their workforce, use AI to supplement them, and increase total output per worker?

“R&D spending is also a question mark. AI can allow researchers to do more, faster, and with fewer resources. But that could either mean less R&D spending is needed, or it could inspire even more investment because the return on R&D becomes higher. We don’t yet know which direction it will go.”

The US is spending much more on AI than the rest of the world. Does that make your US productivity projections different from other economies? “Yes, the productivity numbers we’re seeing in the US modeling work are elevated, both compared to what we had previously projected and compared to some historical periods. But we’re also seeing upticks in other parts of the world. China, for example, shows increased productivity projections as well, and that reflects its serious investments in AI capabilities.

“China is in the process of developing its next five-year plan — the 15th — and a lot of attention is going into building a more advanced manufacturing environment and next-generation technologies like artificial intelligence. Of course, it’s a moving target: access to high-end chips, the development of domestic alternatives, and broader geopolitical dynamics all play a role.

“But China has a large technical talent base and significant government funding aimed at making AI a key part of its growth environment over the next decade.”

The US and China are ahead in the AI curve. For developing economies, how does AI change their growth paths? “One of the advantages many of them — like Vietnam, Bangladesh, Kenya, or parts of sub-Saharan Africa — have historically relied on is a labor-arbitrage system, where it simply costs less to produce goods because labor is cheaper. That’s how countries such as China, Taiwan, and Singapore worked their way up global value chains over time.

“But with AI, that can become disruptive. If AI and automation remove the human element from labor-intensive manufacturing, that cost advantage erodes. It makes it harder for developing countries to use cheap labor as a stepping stone toward industrialization.

“At the same time, businesses and consumers in these economies… can still use AI tools to become more efficient. That’s the tailwind.

“So there are both headwinds and tailwinds for emerging markets that may not have the resources or technical know-how to build out AI domestically but will still feel its effects as the technology spreads.”

VCs say they don’t want to fund yet another coding tool or AI search engine. They want AI that transforms the physical world, like robotics, safety tech, or manufacturing tools. That’s where they see trillion-dollar impact. How do you view that? “It’s interesting, and I agree to an extent. But the US is a services-oriented economy, so even if AI eventually reshapes the physical world, the more immediate impacts will be in services. That’s the largest share of our economy. And you don’t need a robot to see disruption: AI call centers, chatbots, automated accounting, paralegal tools. These can replace tasks that used to require people, and do it for a fraction of the cost.

“There may eventually be a pivot back toward manufacturing as physical AI develops, and some in the political world would like that. But in the near term, AI’s biggest effects will likely show up in the services sector long before they show up on an assembly line in Georgia.”

As AI accelerates, what uncertainties or unknowns stand out to you when you think about the future of economic analysis? “This is an emerging story. The technology is changing month to month. I’m using it professionally, and it’s making me more efficient.

“I don’t know what this looks like in five or ten years, or whether the economist profession will face the same fate as others, with a reduced need for bean counters like me. It’s a wild future. I can’t predict it with any certainty.”

https://www.computerworld.com/article/4108089/a-wild-future-how-economists-are-handling-ai-uncertainty-in-forecasts.html
Arm reorganizes around Physical AI as enterprise robotics gains momentum Thu, 08 Jan 2026 10:49:24 +0000

Arm has created a new Physical AI unit focused on robotics and automotive systems, a sign that enterprise AI is increasingly moving out of the data center and into machines operating in the physical world.

As part of the reorganization, Arm has split its operations into three core groups, separating cloud and AI technologies, edge products such as smartphones and PCs, and a newly formed Physical AI division that brings automotive and robotics under one roof, according to Reuters.

Arm’s decision comes as enterprises experiment with robotics beyond pilots, deploying autonomous systems in factories, warehouses, and logistics operations where real-time decision-making matters more than raw compute power.

This shifts AI workloads toward the edge, forcing CIOs to prioritize device reliability over cloud scale.

Enterprise implications

Arm’s move marks a structural shift in how computing is being aligned for robotics and automotive systems.

“The industry has moved through three distinct phases in the three years since the ‘ChatGPT moment’, from generative AI to agentic AI and now Physical AI,” said Neil Shah, vice president for research at Counterpoint Research. “Bridging digital agents to physical robots requires a massive investment in synthetic data. Unlike agentic AI, which can be trained on text or code, Physical AI requires ‘world models’ trained on high-fidelity video and physics simulations.”

For enterprises, this means planning infrastructure capable of supporting heavy, simulation-driven workloads needed to train robots across a wide range of real-world scenarios, Shah added.

Physical AI is also changing where AI workloads are executed. Arm’s approach shifts more inference and control functions toward edge and on-device environments, particularly for robotics and other real-time systems.

“These workloads require ultra-low latency, energy efficiency, and resilience, which centralized cloud cannot always deliver,” said Biswajeet Mahapatra, principal analyst at Forrester. “CIOs should adopt hybrid architectures: inference and control tasks at the edge or on-device using Arm-based accelerators, while training and large-scale analytics remain in the cloud.”

Networking also becomes a critical factor. Physical AI systems depend on predictable, low-latency connectivity to coordinate sensors and controllers in real time, particularly in factories and warehouses. This can push enterprises to revisit industrial networking designs, with greater emphasis on deterministic performance using technologies such as private 5G, Wi-Fi 7, and time-sensitive networking.

“The result is not cloud displacement, but a rebalance: the cloud serves as the system of learning and coordination, while Arm-based edge and device environments handle real-time perception, decisions, and physical action,” said Manish Rawat, semiconductor analyst at TechInsights.

Steps for CIOs

Preparing for Physical AI requires changes across the technology stack. “IT leaders need to optimize operating systems, AI frameworks, and container platforms for Arm architectures,” Mahapatra said. “Security and lifecycle management for distributed robotics systems must be strengthened. Running pilot projects with Arm-based robotic applications will help validate performance and integration before scaling.”

Rawat noted that enterprises should start by treating robotics and Physical AI as an extension of their core IT stack, not a niche OT experiment.

“This means designing applications with clear separations between training, orchestration, and real-time execution, so components can move cleanly between cloud and Arm-based edge or device platforms,” Rawat said.

The guidance reflects a shift toward treating robotics and Physical AI as long-term infrastructure investments, rather than standalone automation projects.

Arm’s enterprise strategy

With its increased focus on Physical AI, Arm is aiming to design highly optimized architectures as the AI economy shifts from paying for tokens generated to paying for precision in real-time decision-making in physical environments.

“Arm is designing end-to-end architecture to support decision making at the edge,” Shah said. “By standardizing on Arm across both the server and the robot, enterprises can create a ‘seamless compute fabric’ that allows these AI models to move from the cloud to the edge without rebuilding the underlying software stack.”

Standardizing on Arm can reduce fragmentation across device classes, streamline developer skills, and improve portability of workloads from data center to edge to machine.

“However, the risk lies less in vendor lock-in and more in dependency on Arm’s licensing and roadmap decisions as it moves closer to full chip designs,” Rawat said.

For most enterprises, adoption is expected to be gradual. CIOs are likely to begin with targeted deployments in controlled settings, such as factories or warehouses, before scaling robotics and autonomous systems more broadly across their operations.

https://www.computerworld.com/article/4114329/arm-reorganizes-around-physical-ai-as-enterprise-robotics-gains-momentum.html
Companies can compete against AI by delivering what AI can’t Wed, 07 Jan 2026 20:41:00 +0000

The pressure for businesses to leverage generative AI (genAI) or agentic AI is massive, but when I hear executives complaining that there is no way to compete against businesses leveraging the fast-moving technology, I’m forced to chuckle. 

There is a powerful way to compete against genAI and it’s textbook simple: note all of the problems with the tech — the list is extensive — and build your value-add on those. Here’s a rundown of some of the competitive advantages companies can offer.

Reliability

GenAI systems have low reliability. That doesn’t mean most of the answers and recommendations they generate are wrong. It’s simply that errors occur frequently, with no particular pattern.

That could stem from a variety of causes: hallucinations; insufficient, outdated, or poor-quality training data; problems with the data used to fine-tune the model (the fine-tuning data might itself have been accurate but interfered with or contradicted the core training data); the system misinterpreting a query; the user mis-phrasing a query; or a dozen other things.

Another accuracy and reliability issue involves language. GenAI models’ accuracy plunges when they deal with non-English information. For the typical multinational company, that could be a massive problem.

And these systems often ignore guardrails, meaning they may sidestep any restrictions you try to impose.

Add that all up and IT directors simply cannot rely on these systems. It’s like working with a brilliant employee who will periodically make stuff up in an official report. When confronted, the employee is apologetic but stresses that he or she will continue to make stuff up. Can you trust that employee with important work?

I was recently talking with a cybersecurity vendor that decided to avoid genAI tools — and all of their compliance, cybersecurity, accuracy, and data leakage issues — and instead rely on traditional machine learning. Alpha Level, which deals with event alert triage, uses an ML approach known as time-series modeling. It also claims the cost is far lower, at least at enterprise volumes.

Real-world expertise

Some executives talk about leveraging expertise as a way to compete against genAI. That is a decent point, but it has to be information that beats genAI.

Consider a law firm. Even more narrowly, consider case law, where attorneys try to find precedent for an argument they want to make. At one level, genAI tools can win that battle. They literally can memorize every word of every court decision — globally, if need be. No lawyer can do that.

But case law research is not merely about reading cases. The attorney needs to understand the intent, the nuances of a case and the relevant history. No genAI system can do that.

Early in my reporting career, I was a full-time court reporter for a daily newspaper. One afternoon, I found myself in the courthouse basement in the law library. In the back, I saw the managing partner of one of the state’s largest law firms, flipping through books. 

I asked him why he was doing such work when he could easily assign it to a more junior lawyer. He smiled and said, “I’ve been doing this for 40 years. I routinely find obscure cases that these young hotshots would never find. I simply know where to look and how to interpret them.”

That is precisely the kind of mastery that will elude genAI.

Another example is in an area close to where I live: journalism. Some media outlets are trying to use genAI to write stories. There are some very basic stories where that might work, such as routine weather reports, maybe sports scores, and perhaps even obituaries.

But the ultimate story is what used to be known as “man bites dog” and today is simply “surprising the reader.” To do that, a reporter must find things that readers don’t know and that contradict what they do know. That is exactly what genAI cannot do. Everything the technology churns out is simply a reworded version of what has already been said.

If you look at fiction writers, such as those writing movie or television scripts, a similar discovery is made. GenAI could replace really bad writers. But the nature of genAI would almost certainly prevent it from writing hit shows where audiences are quoting lines the next day. 

Data Leakage

Data leakage and the related “lack of data control” is connected to how these systems grow. Are they training on the queries made? Will the information shared in a query on Monday find its way into an answer given to a competitor on Friday?

There are straightforward ways to limit such leakage, whether through open source, on-prem closed systems, or even the extreme of air-gapped systems. (Capital One is a great example of an enterprise toying with such limitations to safely use genAI.)

If a business created a closed-loop system that delivered the flexibility of genAI without the data risks, it could do remarkably well.

Agentic AI

Agentic systems are simply begging for a company to devise a locked-down system for leveraging agents without the massive risks.

In short, there are quite a few powerful ways to compete successfully with genAI and agentic systems. Just don’t ask ChatGPT to recommend any.

https://www.computerworld.com/article/4114017/companies-can-compete-against-ai-by-delivering-what-ai-cant.html
5 areas of ITSM being transformed by automation in 2026 Wed, 07 Jan 2026 20:09:52 +0000

Automation is transforming IT service management (ITSM), moving service desks from reactive, manual workflows toward systems that can intelligently route, prioritize, and resolve issues with minimal human intervention.

Recent research from Freshworks found that IT professionals lose nearly seven hours every week — almost a full workday — to fragmented tools and overly complicated work processes. Implementing ITSM automation reduces manual effort, accelerates resolution, improves consistency and accuracy, enables proactive issue prevention, and delivers faster, more reliable service that measurably improves employee and end-user satisfaction.

As hybrid infrastructure, distributed teams, and rising service expectations become the norm, automation must embed across real workflows, from incidents and requests to assets and changes, to deliver meaningful results.

How is automation improving ITSM?

ITSM automation streamlines the service lifecycle with event-driven workflows, intelligent triage, and rule-based actions. By automating repetitive processes like ticket handling and administrative tasks, businesses ensure routine tasks are executed quickly, predictably, and consistently.

Key elements of ITSM automation include:

  • Automated workflows: Route, escalate, and resolve tasks based on predefined rules
  • Service desk automation: Fast ticket creation, categorization, and assignment
  • SLA automation: Monitor deadlines, apply timers, and trigger alerts
  • Event-driven automation: Act on system alerts and performance thresholds
  • Intelligent triage: Sort and prioritize issues based on impact and urgency

These capabilities lay the foundation for faster, more reliable service delivery across IT operations.
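The rule-based elements above (categorization, prioritization, routing) can be sketched in a few lines of Python. This is a minimal illustration, not any particular ITSM product’s API; the ticket fields, the priority matrix, and the team names are all assumed for the example:

```python
from dataclasses import dataclass

# Illustrative priority matrix: (impact, urgency) -> priority label.
PRIORITY = {
    ("high", "high"): "P1",
    ("high", "low"): "P2",
    ("low", "high"): "P3",
    ("low", "low"): "P4",
}

# Illustrative routing rules: ticket category -> assignment queue.
ROUTES = {
    "network": "Network Ops",
    "access": "Identity Team",
    "hardware": "Desktop Support",
}

@dataclass
class Ticket:
    category: str
    impact: str   # "high" or "low"
    urgency: str  # "high" or "low"
    queue: str = "Triage"
    priority: str = "P4"

def triage(ticket: Ticket) -> Ticket:
    """Apply rule-based prioritization and routing to a new ticket."""
    ticket.priority = PRIORITY[(ticket.impact, ticket.urgency)]
    # Unknown categories fall back to the general service desk queue.
    ticket.queue = ROUTES.get(ticket.category, "Service Desk")
    return ticket

t = triage(Ticket(category="network", impact="high", urgency="high"))
print(t.priority, t.queue)  # P1 Network Ops
```

A real service desk evaluates far richer rules, but the shape is the same: deterministic lookups that make routine tickets fast, predictable, and consistent.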

What are the five core areas of ITSM being transformed by automation?

1. Incident and request management: Automation speeds up ticket creation, classification, assignment, and resolution, including:

  • Auto-categorization and routing
  • Suggested solutions from knowledge articles
  • Automated status updates and notifications
  • Faster mean time to resolution (MTTR)

2. Change and release management: Automation standardizes change approvals and helps reduce risk with:

  • Preconfigured approval workflows
  • Impact-based change routing
  • Automated deployment tasks

3. Asset and configuration management: ITSM platforms use automation to maintain accurate inventories and compliance through:

  • Automated asset discovery
  • Continuous configuration item (CI) updates
  • Event-based alerts for hardware and software drift

4. SLA compliance and governance: Automation helps teams maintain predictable service quality with:

  • SLA timers and escalation logic
  • Auto-remediation actions before an SLA breach occurs
  • Consistent, auditable process execution

5. Cross-department workflows: Automation orchestrates tasks across IT, HR, Facilities, and other teams, including:

  • Onboarding / offboarding workflows
  • Access provisioning
  • Multistep request fulfillment
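The SLA timers and escalation logic from item 4 can be illustrated with a small sketch. The resolution targets and the 80% escalation threshold below are assumed values for the example, not an industry standard:

```python
from datetime import datetime, timedelta

# Illustrative SLA resolution targets per priority.
SLA_TARGETS = {
    "P1": timedelta(hours=4),
    "P2": timedelta(hours=8),
    "P3": timedelta(hours=24),
}
ESCALATE_AT = 0.8  # escalate once 80% of the SLA window has elapsed

def sla_status(priority: str, opened: datetime, now: datetime) -> str:
    """Return 'ok', 'escalate', or 'breached' for a ticket's SLA timer."""
    deadline = opened + SLA_TARGETS[priority]
    if now >= deadline:
        return "breached"
    elapsed_fraction = (now - opened) / SLA_TARGETS[priority]
    return "escalate" if elapsed_fraction >= ESCALATE_AT else "ok"

opened = datetime(2026, 1, 7, 9, 0)
print(sla_status("P1", opened, opened + timedelta(hours=1)))                # ok
print(sla_status("P1", opened, opened + timedelta(hours=3, minutes=30)))    # escalate
print(sla_status("P1", opened, opened + timedelta(hours=5)))                # breached
```

In an ITSM platform the "escalate" state would trigger a notification or auto-remediation action before the breach occurs, which is exactly the point of automating SLA governance.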

How is AI accelerating ITSM automation?

AI amplifies ITSM automation by adding prediction, guidance, and pattern recognition beyond what traditional workflows offer. For example: 

  • Predictive insights: Identify recurring issues, detect anomalies, and anticipate outages for proactive service management
  • Intelligent triage: Analyze ticket data, recommend prioritization, categorize requests, and surface likely root causes

By combining AI with automation, IT teams can move from reactive service to predictive, data-driven operations.

How can companies start implementing ITSM automation?

Implementing ITSM automation doesn’t have to be overwhelming. Businesses can begin small and scale up over time:

  1. Identify high-volume, repetitive tasks: Password resets, access requests, ticket classification, and similar frequent workflows. 
  2. Standardize processes before automating: Clear, repeatable workflows ensure reliable results. 
  3. Prioritize SLAs and high-impact workflows: Automate where it reduces risk, accelerates resolution, or improves user experience. 
  4. Expand into cross-department orchestration: Connect IT with HR, Facilities, Legal, and other teams to streamline end-to-end processes. 
  5. Integrate AI carefully: Use AI for triage recommendations, predictive analytics, and automated diagnostics to enhance—not replace—core workflows. 
  6. Choose a unified ITSM platform: A centralized platform with built-in automation and AI capabilities ensures scalability and consistency. 

Apply ITSM automation at scale with Freshservice

For organizations looking to scale ITSM automation, Freshservice from Freshworks provides a centralized platform and the infrastructure needed to deploy AI-driven workflows, intelligent triage, and automated governance across service operations. The platform helps IT teams achieve faster resolution, improve SLA compliance, and increase operational resilience.

To learn more, visit us here.

https://www.computerworld.com/article/4114012/5-areas-of-itsm-being-transformed-by-automation-in-2026.html
At CES, AI moves beyond chatbots and agents into the physical world Wed, 07 Jan 2026 19:47:39 +0000

LAS VEGAS — AI everywhere was the big theme at CES this week, with the technology showing up in nearly every application and device, whether users actually want it or not.

Beyond AI-enabled refrigerators and smart glasses, the transformative story of this year’s show actually had an enterprise bent: the arrival of physical AI. That’s the kind of AI that makes robots smart and that makes autonomous cars safe for the road. (Though robots and cars are the most visible part of the physical AI picture, they’re also the smallest.)

The bigger impact of physical AI is in large-scale industrial applications. What is an assembly line, after all, but a giant robot composed of hundreds or thousands of smaller ones?

And why stop at assembly lines? Why not treat an entire factory as one giant robot? In fact, why stop at a single factory when you can look at the entire supply chain — all the factories, all the partners and suppliers, perhaps even the customers.

Say, for example, a bumper falls off one of the specialized industrial vehicles made by the Oshkosh Corporation.

“You can have traceability,” Jay Iyengar, the company’s executive vice president, CTO, and strategic sourcing officer, told Computerworld. For example, was there enough torque applied to the bolt at the factory? 

“So, you go back and, say, ‘Our manufacturing plant was okay and there was nothing wrong with the torque.’ Then, when was this bumper manufactured and by which supplier? We can trace it back to the supply chain, all the way through.”

And once the visibility is in place, the entire AI-powered manufacturing system can autonomously — or semi-autonomously — take action to correct the problem. “That’s the euphoria you get from that,” Iyengar said.

Oshkosh isn’t ready to update everything to AI right away, however. Even once Oshkosh’s factories are brought into the industrial AI era — it is starting the upgrades this year — all the other companies in the supply chain have to come on board as well. The bigger suppliers are well on their way to full digitization and visibility, but some of the smaller players face a bigger challenge.

“There’s a lot of work involved,” she said. “It’s going to take us several years to get to that level.”

Autonomous cars 

At CES, there were a number of announcements related to physical AI. Nvidia, for instance, talked about autonomous car AI.

In a keynote speech Monday, Nvidia President and CEO Jensen Huang announced the release of Alpamayo, an open-source AI model designed specifically for autonomous cars.

Having AI that understands not just car systems but also how the world works is key to making autonomous vehicles safer, he said. “If a ball rolls out into the street, a child might be following quickly behind,” Huang said.

The first car built atop this platform — the all-new Mercedes‑Benz CLA — will hit the market in the first quarter of this year, he said. The vehicle has already received a five-star safety score from Euro NCAP, the European New Car Assessment Programme.

“It was just rated the world’s safest car,” Huang said, noting that Uber and BYD are also using the new autonomous car AI platform.

Autonomous factories

Nvidia is looking beyond cars — or individual robots — as applications for its focus on physical AI.

Caterpillar, for example, uses Nvidia technology and has not just built the world’s largest robot, but is looking to scale the technology up to its factories, Huang said. “These manufacturing plants are going to be, essentially, giant robots,” he said.

But while Nvidia makes open-source AI models for things like robots, cars, and real-world physics, it doesn’t have the data and expertise to make entire factories autonomous.

The company that emerged as the leader in that area this week was Siemens, which was founded back in 1847 as a small machine shop and has since grown into a global tech company. Nvidia plans to use Siemens’ technology to help improve its own chip factories, and the chip design process itself, even as Siemens relies on Nvidia’s models as part of its own AI development.

Siemens already has a big presence in the global manufacturing sector. One out of every three manufacturing machines worldwide runs a Siemens controller, Siemens President and CEO Roland Busch told reporters Monday.

That’s a lot of data for the company to work with, and it allows Siemens to build what Busch called the industrial AI operating system.

That includes the Digital Twin Composer, which will allow companies to create digital twins of entire factories. And these twins won’t be used simply to offer a real-time view of operations or to help companies simulate individual future scenarios.

This tool, powered by AI, will also allow companies to predict failures before they occur and to autonomously — or semi-autonomously — take action to remediate problems or rearrange production and schedules around the problem until it can be fixed.

This digital twin can also understand the physics behind how components interact, predict events that might not be in its training data set, and pull in information such as weather data from external sources. 

Siemens said its first autonomous factory will come online in Germany this year and the Digital Twin Composer tool will be on the market by mid-year. That said, some customers have already moved ahead with the technology.

PepsiCo, for example, rolled it out last year, said the company’s Athina Kanioura, CEO for Latin America and global chief strategy and transformation officer. “We have had significant impact, even in the first three months,” she said.

Kanioura was one of several corporate leaders who participated in the CES opening keynote. “In the Gatorade plant in the US, we were able to increase efficiency 20% in just the first three months,” she said. “And we had a capex reduction of 10 to 15%.”

The biological frontier

Siemens also envisions its digital twins as helping automate the pharma drug discovery process. Its acquisition of Dotmatics, a life sciences R&D software company, was completed in mid-2025.

“Drugs are getting more expensive every time we go through a new development cycle,” Siemens’ Busch told reporters. “And when you target specific long-tailed diseases, the costs become prohibitive.”

The biggest problem is that drug discovery is still very much a labor-intensive process, he said. But there are opportunities to apply industrial manufacturing principles to the research and development process.

“What if we can do the same things for cells and how the cells behave?” he said. “And look at antibodies, and drug compounds — and simulate how they would interact with antibodies.”

He estimated that the technology Siemens is building on top of the Dotmatics platform will accelerate the lab-to-patient cycle by 50%.

Editor’s note: Lenovo paid for Maria Korolov’s transportation and hotel costs for this year’s CES, but had no editorial role in the creation of this story.

https://www.computerworld.com/article/4113996/at-ces-ai-moves-beyond-chatbots-and-agents-into-the-physical-world.html
CES 2026: Samsung previews the future of the iPhone? Wed, 07 Jan 2026 17:02:37 +0000

After years and years of work, Apple at last has the crease-free folding display it needs for the iPhone Fold — and Samsung is displaying some of the first manufacturing prototypes at this year’s giant Consumer Electronics Show (CES).

Reports from the show floor are enthusiastic about the display, confirming it shows no visible crease whatsoever. It’s made by Samsung’s display arm, which is also an Apple partner. SamMobile claims the panel is set for use in the upcoming Galaxy Z Fold 8, but we believe Apple and Samsung Display have been cooperating on the folding screen’s design and development, and we expect a version of it will also be used in the iPhone Fold later this year.

Morgan Stanley analyst Erik Woodring this week confirmed the first foldable iPhone “remains on track for a Fall 2026 launch, with supply chain forecasts targeting 15-20m units for the first full-year cycle, or 7-8m units of C2H26 production, subject to future change,” according to a client note seen by Computerworld.

Apple’s work is almost complete

Apple has been working on a folding iPhone for over a decade, filing patents for such a device since 2014. All the same, despite those efforts, it has never actually introduced one.

I believe this is because the company needed to wait until manufacturing technology had evolved to a point where it became possible to make hard-wearing, resilient folding displays that did not snap in use and did not show a visible crease. The last thing Apple wants is for tens of millions of iPhones to snap in normal use — that kind of live experimentation would make “antenna-gate” seem like a vacation in Hawaii. At the scale of iPhone sales, Apple simply could not afford to introduce this device until those concerns — in the glass and also the hinge — were resolved.

What Samsung is showing at CES seems to meet Apple’s demands. It also seems to deliver on what we expect Apple will introduce in its folding phone, specifically an almost totally invisible hinge and an OLED display.

New partners enter the frame

Part of what makes this possible is a new metal plate technology supplied by South Korea’s Fine M-Tec, a company Apple analyst Ming-Chi Kuo said last year would make something similar for the iPhone. These metal plates help mitigate the stress on the display hinge, enabling the display material to avoid becoming visibly creased. It seems significant that the purported Apple partner last year announced a $12 million investment in new equipment and facilities to expand production of these metal plates for folding smartphones. These laser-drilled components are likely to be rolling off the production line now, and will be used by both Apple and Samsung.

The metal plate design is only part of the display innovation Apple and its Samsung manufacturing partner have had to achieve to reach this point. It’s also important to think about the other half of the equation in play with folding iPhones, which is what people will do with these things. We kind of know the answer to this now each time we open an iPad mini, which offers just slightly more display (8.3 inches) than the 7.8 inches we expect from the iPhone Fold.

The 120.6mm-by-167.6mm (unfolded) device will use an Apple 5G modem, TouchID, eSIM, and four cameras in a device that folds out to be around as thin as a 5.6mm iPhone Air. (I consistently refer to the device as iPhone Fold, but we don’t yet know what Apple will actually call it.)

Apple silicon is enabling new designs

What makes the device possible isn’t just the hinge or the display — the ability to create super-slim folding smartphones that aren’t compromised in terms of performance or computational efficiency comes as a direct result of Apple Silicon. What that means is that the folding iPhone built on the technology Samsung is offering a partial glimpse of at CES will also be one of the best-performing smartphones money can buy. We know this because Apple’s existing iPhones already lead the business.

And what that means, in short, is that Apple’s folding device is likely to have been worth waiting for. We’ll find out more this fall.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

https://www.computerworld.com/article/4113862/ces-2026-samsung-previews-the-future-of-the-iphone.html
HP’s new computer is built into the keyboard Wed, 07 Jan 2026 14:36:19 +0000

In conjunction with the CES show in Las Vegas, HP has shown off the Eliteboard G1a, a keyboard computer aimed primarily at business users.

Despite the fact that the computer is built into the keyboard, it offers decent specifications: an AMD Ryzen AI processor, AMD Radeon 800 graphics, up to 64 gigabytes of DDR5 memory, up to 2 terabytes of storage, and dual USB-C ports.

The dimensions are 358 x 118 x 17 mm, while the weight is 768 grams.

According to HP, the Eliteboard G1a is a full-fledged “AI computer”, which means that it is powerful enough to run AI models locally.

Unfortunately, at the time of writing, we have not been told the price or launch date. Nor do we know if the computer will be sold on the Swedish market.

This article originally appeared on Computer Sweden; for further reporting, see “HP bets on keyboard-based PCs for the hybrid workforce”.

https://www.computerworld.com/article/4113831/hps-new-computer-is-built-into-the-keyboard.html
Common health questions to ask ChatGPT Wed, 07 Jan 2026 14:29:19 +0000

OpenAI has presented a new report entitled AI as a Healthcare Ally.

In the report, we learn that it is very common for users to ask ChatGPT questions about their health.

One in four users (over 200 million) asks health-related questions every week, while one in twenty users (over 40 million) asks health-related questions every day.

In addition to questions about symptoms or medicines, questions about health insurance are also common, especially in the United States.

Of course, relying on advice from AI tools can be dangerous, so if you’re worried about something, it’s better to go to your local health center.

This article originally appeared on Computer Sweden.

https://www.computerworld.com/article/4113826/common-health-questions-to-ask-chat-gpt.html
Android source code will now only be released twice a year Wed, 07 Jan 2026 14:10:06 +0000

Ever since the first version of Android was released in 2008, anyone who wanted to could access the source code of the operating system.

However, Google has now announced that the source code will only be released twice a year, once in the spring and once in the fall.

Since there are four major updates to Android each year, the source code will only be released for half of them.

According to Google’s spokesperson, the purpose of the change is to provide developers with “code that is more stable and secure”, Android Authority reports.

This article originally appeared on Computer Sweden.

https://www.computerworld.com/article/4113818/android-source-code-will-now-only-be-released-twice-a-year.html
Chinese authorities scrutinize Meta’s purchase of AI startup Manus Wed, 07 Jan 2026 13:57:12 +0000

Last week, news broke that Meta is buying Chinese AI startup Manus for around $2 billion. The company is known for its AI agent that can handle everything from job interviews to stock analysis. Meta plans to integrate Manus’ AI agent into its own products.

Now, the Financial Times reports that China’s Ministry of Commerce has decided to review the purchase to determine whether the deal violates the country’s export control rules for technology. Manus was founded in Beijing but moved its team and technology to Singapore in the summer of 2025, where the company now operates under the name Butterfly Effect Pte.

According to the Financial Times, the deal has raised concerns in China as it could encourage more tech companies to move abroad to avoid domestic regulation. At the same time, Manus’ technology is not considered strategically critical, so intervention is not certain.

The review is currently said to be at an early stage and may lead to export license requirements. Neither China’s Ministry of Commerce nor Meta has commented on the matter, while Manus declined to comment.

This article originally appeared on Computer Sweden.

https://www.computerworld.com/article/4113806/chinese-authorities-scrutinize-metas-purchase-of-ai-startup-manus.html
An instant Android search upgrade Wed, 07 Jan 2026 10:45:00 +0000

Sometimes, you stumble onto a digital discovery of some sort and need a touch of time to wrap your head around if or how it could help you.

Other times, you slap something new onto your favorite phone and instantly realize your life has been upgraded.

Today, my friend and fellow holiday-hibernation waker, we’ve got a treasure of the latter variety to feast on for the start of 2026. It’s a snazzy little somethin’ I sunk my teeth into over the break and have been champing at the bit to share with ya since.*

Prepare to have that instant life upgrade sensation.

* By “champing at the bit to share with ya,” I mean “I briefly thought, ‘Ooh, I should totally write about this at some point!’ and then fell quickly back into a deep slumber, waking only for occasional pancake breaks, prior to my alarm going off and forcing me to return to the real world this morning.” But, alas. Here we are.

[Psst: Love shortcuts? My free Android Shortcut Supercourse will teach you tons of time-saving tricks for your phone. Sign up now to get started!]

Android search — supercharged

First things first, a quick hit of reality here: As exceptionally smart and stunning mammals who choose to carry cellular telephones based on Google’s mobile operating system, you’d think we’d have superb search powers at our fingertips 24/7.

I’m not talkin’ about regular ol’ web search or any manner of large-language-model chatbot chicanery, either. I’m talkin’ about the simple-seeming ability to search your entire device and everything on and around it in a single, streamlined spot — something you’d think gadgets connected to Google, of all places, would offer out of the box in a blow-your-mind kind of way.

But alas: For whatever reason, Android has never quite managed to achieve that and latch onto that ironically Googley advantage so many of us would appreciate.

And that’s where the tool I have for you today comes into play.

It’s an incredibly handy app called Pixel Search. But don’t let that name fool ye: It’s every bit as useful whether you’re using a Google-made Pixel gizmo or any other Android phone.

Pixel Search does one thing and one thing only: It gives you a new all-purpose search prompt on your home screen that’s lightyears ahead of whatever search system you’re relying on now — including, very much, Google’s own.

With one tap on the Pixel Search prompt, in that single streamlined space, you can:

  • Search the web — using Google or any other search service you like
  • Search Gemini, Perplexity, or any other LLM chatbot-style service
  • Search through your own previous web searches, as made through the app
  • Search through your contacts, then call or text any of ’em with one more tap
  • Search through files you’ve downloaded or otherwise transferred to your device’s local storage and perform a variety of actions on any file you find
  • Search for and then quickly open apps you’ve installed
  • Search for and then quickly open specific app shortcuts — to jump directly to actual actions within apps you use
  • Search inside Google Maps, YouTube, the Google Play Store, and other available services
  • Search inside your system calculator and see instant answers to problems and equations
Pixel Search’s everything-together search results.

JR Raphael, Foundry

I’m tellin’ ya: This thing is phenomenal. It picks up exactly where an age-old Android go-to called Sesame Search left off, prior to its recent abandonment. And it couldn’t be much simpler to set up and start using:

  • First, snag the Pixel Search app from the Play Store. It’s free, with completely optional donations to the developer if you want to support the effort (but no features locked behind paywalls or anything like that).
  • Open ’er up and follow the prompts to grant the app the access it needs to operate.
    • The app requires just a few lower-level permissions if you want it to be able to search through your contacts and files and also initiate phone calls for you, but it doesn’t request any deep levels of access — and its developer is clear about the fact that it doesn’t collect or share any manner of personal data.
  • Aaaand — well, that’s pretty much it.

At that point, you should be sitting at the main Pixel Search search screen, and you can test it out for yourself to see how it works. Just start typing any character into the search box, and you’ll see results pop up across all the appropriate areas.

You can find and take action on almost anything imaginable within the Pixel Search search results.

JR Raphael, Foundry

Not bad, right?!

Beyond the basic results, you can also tap the settings icon in the upper-right corner — at the right edge of the search box — to dig into Pixel Search’s settings and customize all sorts of stuff about how exactly the app looks and works. You’ve got oceans of interesting options to tweak, if you’re ever so inspired. But really, most of the default settings are perfectly fine, and you probably won’t need to do much, if any, adjusting.

Pixel Search’s settings are full of interesting options, but you don’t have to mess with anything for an exceptional experience.

JR Raphael, Foundry

All that’s left is to add the Pixel Search widget onto your home screen for easy ongoing access, and that’s no different than adding any other Android widget:

  • Press and hold on any open space on your home screen.
  • Tap the option to add a widget. 
  • Scroll through the list of available options until you see Pixel Search.
  • Tap it, if needed, to expand its section.
  • Then either tap or press and hold the Pixel Search widget to place it wherever you like on your home screen.
The Pixel Search widget — familiar on the surface, with so much extra power waiting inside.

JR Raphael, Foundry

Remember, too, that you can take total control of that search bar’s appearance by pokin’ around in the Pixel Search settings — if you want to add or remove extra shortcuts from it, for instance, or give it a different look in any way.

If you aren’t fond of the bar at all, you can also simply plop a shortcut to the Pixel Search app itself on your home screen — by pressing and holding its icon in your app drawer and then dragging it over to any position you like — or, if you’re really feeling wild, you can even add it as a one-tap tile in your Android Quick Settings for on-demand access from that area. (Look in the “Launch & Integration” section of the Pixel Search settings to make that addition.)

However you choose to use it, you’ll be flying around your phone like never before and finding anything you need in seconds flat.

That’s precisely how Android oughta work — and now, it’s the efficiency-enhancing experience you’ll enjoy, throughout 2026 and beyond.

Get six full days of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks!

https://www.computerworld.com/article/4112883/android-search-upgrade.html
In the US, the death of expertise Wed, 07 Jan 2026 07:00:00 +0000

Back in 1980, science fiction and science author Isaac Asimov wrote, “There is a cult of ignorance in the United States, and there has always been. The strain of anti‑intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that my ignorance is just as good as your knowledge.” 

He didn’t know the half of it.

US President Donald J. Trump’s regime has aggressively cut federal science and technology research funding since taking office last January. Recently, one of those cuts hit home for me. NASA’s Goddard Space Flight Center library has been closed, and while some of its materials will be stored, at least 85% will be thrown away.

It was at Goddard in the mid-1980s that I learned I had a gift for technology, and, better still, I could explain it to other people. Put simply, that’s where my career as a tech journalist began. While there, I also got to know the engineers and scientists who’d pioneered space. This closure is a disgrace.  

I mean, who closes down a research library? (It’s not like they cost a lot of money, and libraries like this one contain a mountain of material that’s never been digitized.) The answer: an administration that has no interest whatsoever in science, knowledge, wisdom, or expertise, that’s who.

While the closure is personally painful to the people who used that library, there have been worse losses. NASA itself faces deep cuts, for example. As John Grunsfeld, an astrophysicist and astronaut who flew five shuttle missions, said: “America is stepping back from leadership in virtually every science area.… The proposal for the NASA science budget is…cataclysmic for US leadership in science.”

It’s not just NASA that’s being cut into irrelevance. The administration halted National Institutes of Health (NIH) grant reviews shortly after Trump’s inauguration and by June 2025 had canceled around 2,100 grants worth $9.5 billion.

At the same time, Robert F. Kennedy Jr., secretary of Health and Human Services, has rolled back federal support for vaccinations and fired expert advisors, such as those on the Advisory Committee on Immunization Practices — replacing them with anti-vax yes-men. As Dr. Peter Marks, the former Food and Drug Administration (FDA) vaccine official, wrote in his resignation letter after being forced out by Kennedy: “It has become clear that truth and transparency are not desired by the secretary, but rather he wishes subservient confirmation of his misinformation and lies.”

Looking ahead, Trump’s FY2026 “skinny budget” seeks massive reductions, including nearly 40% ($18 billion) from NIH, 57% ($5.1 billion) from the National Science Foundation (NSF), and 14% from the Department of Energy (DoE) Office of Science. Overall, the budget envisions slashing basic research by 34% and applied research by 38%, prioritizing private-sector alignment over “unfocused” federal investments. Critics warned such cuts could shrink the US gross domestic product by up to $1 trillion over a decade due to lost innovation.

That’s the rub. You see, government science research has essentially fueled the high-tech world we live and work in today. For example, you’re reading this today on the internet — the same internet that grew from ARPANET, a 1969 Department of Defense (DoD) networking experiment. 

If you’re reading this on a smartphone in an Uber, your driver is using GPS, which started as a military navigation system sponsored by the DoD, to get you to your destination. If you’ve avoided getting a bad case of Covid-19 or the flu, you can thank government labs and grants, which supported much of the basic science and early-stage work behind mRNA and other vaccine technologies. 

Heck, even artificial intelligence (AI), everyone’s current tech crush, wouldn’t be where it is today without the sustained public funding for neural network research and reinforcement learning in the 1980s and ‘90s. More recently, the NSF-led National AI Research Institutes and DARPA’s “AI Next” campaign fueled AI before every venture capitalist on the planet decided — for better or worse — to invest billions of dollars in it. 

We need federal funding for research. We need access and respect for real knowledge and expertise. Without this, we can only blindly stumble forward into the future. As John Holdren, a Harvard University physicist, put it, “The attack on science must be seen as one component of a larger attack on information, on facts, on independent analysis.” 

Whether it’s an increase in infectious diseases, such as the reemergence of measles (thanks to government-approved, anti-vax propaganda) or turning a blind eye to the speed at which AI technology is evolving, spreading and morphing — threatening to disrupt a whole host of industries and workers — or simply the loss of deep analytical knowledge, we must embrace expertise and the truth.

We cannot afford a future based on ignorance. That won’t turn out well for anyone. 

https://www.computerworld.com/article/4113421/in-the-us-the-death-of-expertise.html
Accenture to acquire UK AI startup Faculty Tue, 06 Jan 2026 20:17:44 +0000

Accenture has agreed to acquire AI startup Faculty, a potentially significant move in a consultancy sector currently scrambling to add greater artificial intelligence expertise.

It plans to integrate Faculty’s UK-based workforce of 400 “AI native professionals” with its consulting teams, a move Accenture said will enable it to offer its customers “world‑class AI capabilities.” Accenture will also integrate Faculty’s AI decision intelligence platform, Frontier, into its services.

“With Faculty, we will further accelerate our strategy to bring trusted, advanced AI to the heart of our clients’ businesses,” Accenture CEO Julie Sweet said in the statement.

One detail that marks the acquisition as unusual is that Faculty’s current CEO, Marc Warner, will become Accenture’s chief technology officer (CTO) and join its Global Management Committee. This means that the head of a company employing a few hundred people will take a key position in a huge consulting outfit with nearly 800,000 employees worldwide.

Accenture still lists its CTO as Rajendra Prasad, who will presumably step back from this role to focus on his other day job as the company’s Group Chief Executive – Technology. CIO.com contacted Accenture and Faculty to confirm the new roles, but had no response by publication time.

AI reinvention

Traditional tech acquisitions are usually motivated by the value offered by a company’s patents, products and customers. With AI companies, just as important right now is human expertise.

Faculty offers all of these. Co-founded in 2014 as ASI Data Science by then-Harvard quantum physics research fellow Warner, it was renamed Faculty in 2019. The renaming might have been an attempt to disassociate it from allegations, which it strenuously denied, that it was part of the same internship program as scandal-hit company Cambridge Analytica, through the latter’s parent company, SCL Group.

Since then, Faculty has established a solid reputation through its work with the UK government, including the creation of an NHS Early Warning System (EWS) system used to predict hospital admissions and ventilator requirements during the Covid pandemic.

This dovetails well with Accenture’s direction; it has spent the last year undergoing an AI makeover. In June, the company folded five business units into a single division, Reinvention Services, as part of a plan to “reinvent itself for the age of AI.” At the same time, it started calling its employees “reinventors.”

The company has also formed alliances with OpenAI and Anthropic, which will see tens of thousands of its employees trained to use and promote both companies’ chatbot and agentic technologies.

“We are writing the playbook for how to be the most AI-enabled, client-focused professional services company in the world,” said Accenture CEO Sweet in this week’s announcement of the acquisition.

This article originally appeared on CIO.com.

https://www.computerworld.com/article/4113397/accenture-to-acquire-uk-ai-startup-faculty-2.html