
10 Thrilling Developments in Computer Chips

by Charlie Parker
fact checked by Darci Heikkinen

In the 21st century, computer chips are everywhere: in our phones, our cars, our medical devices, our watches, and our personal computers and laptops. Over the last few decades, a few countries and a handful of companies have become dominant forces in chip design and manufacturing. Nations are realizing that if they don’t develop expertise in chip design and production, they’ll end up dependent on those who do. So a “chip arms race” has begun, with everyone competing to develop or secure the most advanced chip technologies.

Currently, virtually all chips are based on silicon, an element abundant in sand, though only highly purified silicon derived from quartz can be used in chip manufacturing. These chips rely on transistors to process electrical signals, with intricate copper wiring connecting components within the chip. However, new concepts and designs are emerging, and researchers are deeply exploring possibilities beyond traditional materials and techniques.

In this list, we’ll microprocess some of the most exciting data coming out of this digital arms race. We’ll shine some ultraviolet light on using light as a computing medium, and we’ll look at some of the places where the most exciting technologies are being developed. Let’s put our collective thinking caps on and see how many of these experiments truly compute.

Related: 10 Examples of Vintage Computing Still in Wide Use Today

10 Albany NanoTech: America’s First National Semiconductor Hub

Albany Nanotech Center Announced

When people think of New York State, images of Wall Street, vast apple orchards, and dairy farms usually come to mind. New York is the nation’s second-largest apple producer after Washington State. But starting in 2024, New York is set to add another feather to its cap: leadership in the semiconductor industry.

Recently, New York was chosen to host the nation’s first National Semiconductor Technology Center (NSTC), an achievement announced by Governor Kathy Hochul. This center, based at the Albany NanoTech Complex, will receive $825 million in federal funding through the CHIPS for America initiative. This investment will make New York State a leader in semiconductor research, positioning Albany’s facility as a critical player in advancing U.S. technological competitiveness and bolstering national security.

The NSTC will focus on developing cutting-edge Extreme Ultraviolet (EUV) lithography technology, enabling the creation of smaller, faster, and more energy-efficient computer chips. By reducing U.S. reliance on foreign semiconductor supply chains, New York State will become a vital hub for chip research and innovation, bringing with it numerous high-paying manufacturing jobs.[1]

9 HP’s Lab-to-Fab Silicon Device Facility Fuels Biomedical Innovation

$50M into semiconductor CHIPS funding coming to Oregon

HP’s Lab-to-Fab facility in Corvallis, Oregon, recently secured $50 million in funding through the CHIPS and Science Act. This funding will enable the facility to advance the production of cutting-edge silicon devices for research in microfluidics and microelectromechanical systems (MEMS). The project’s ultimate goal is to scale manufacturing for applications that could revolutionize the life sciences, including genetics, biology, neuroscience, and biotechnology.

The facility’s expansion will create approximately 150 construction jobs and 100 high-tech manufacturing positions, providing a boost to local employment and strengthening U.S. capabilities in advanced manufacturing. HP is collaborating with prominent research institutions, including Harvard Medical School and the CDC, to drive scientific healthcare breakthroughs. HP CEO Enrique Lores believes in the transformative potential of microfluidic silicon devices and their ability to impact both medical and semiconductor technology.[2]


8 U.S. Chip Manufacturing: TSMC’s Arizona Presence

TSMC’s Arizona Plant and US Chip Ambitions

Taiwan Semiconductor Manufacturing Company (TSMC), the world’s largest semiconductor manufacturer, recently began producing computer chips in the United States at its Phoenix, Arizona, plant. Early production yields at this facility have outperformed comparable plants in Taiwan, with the Arizona plant achieving a 4-percentage-point higher yield.

The Arizona facility, which began engineering wafer production in April 2024 using 4-nanometer technology, is set to enter full-scale production in early 2025. This milestone supports two of the U.S. government’s key goals in semiconductor manufacturing: strengthening domestic production and reducing reliance on foreign sources. TSMC is a primary chip supplier for tech giants like Nvidia and Apple. By ramping up its U.S. operations, the company is positioning itself as a top-level player in the growing U.S. semiconductor industry.

TSMC’s achievements in Arizona have allowed it to gain competitive ground on Intel and Samsung, who have encountered recent delays and financial setbacks. If TSMC can sustain this success, it could play a crucial role in helping the United States achieve its long-term goal of technological independence.[3]

7 Google Cloud Embraces ARM-Based Computer Chips

How Arm Powers Chips By Apple, Amazon, Google And More

ARM computer chips are popular with device manufacturers because they’re built to be energy-efficient. ARM stands for Advanced RISC Machines, and RISC stands for Reduced Instruction Set Computing. An instruction set is like a list of tasks that a chip can perform, and reducing its size means the chip can focus on essential tasks. This simpler approach helps ARM chips use less power, making them ideal for applications where energy efficiency is paramount. Google is now bringing this efficiency to its data centers by using ARM-based chips.
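To make that concrete, here’s a toy Python sketch of the RISC idea (not real ARM code, just an illustration): a tiny machine that knows only a few simple instructions, with a more complex task, multiplying by 10, built out of them.

    # A toy "reduced instruction set" machine, purely for illustration.
    # Real ARM instructions differ; the point is that a small set of simple
    # operations can be combined to express more complex tasks.
    def run(program, registers):
        for op, *args in program:
            if op == "LOAD":      # LOAD reg, value
                registers[args[0]] = args[1]
            elif op == "ADD":     # ADD dest, src  (dest += src)
                registers[args[0]] += registers[args[1]]
            elif op == "SHL":     # SHL reg, bits  (shift left = cheap multiply by 2^bits)
                registers[args[0]] <<= args[1]
        return registers

    # "Multiply r0 by 10" built from simple steps: r0*10 = (r0*8) + (r0*2)
    program = [
        ("LOAD", "r1", 0),
        ("ADD",  "r1", "r0"),   # r1 = r0
        ("SHL",  "r0", 3),      # r0 = r0 * 8
        ("SHL",  "r1", 1),      # r1 = r1 * 2
        ("ADD",  "r0", "r1"),   # r0 = r0*8 + r0*2 = r0*10
    ]
    print(run(program, {"r0": 7, "r1": 0}))  # {'r0': 70, 'r1': 14}

A more complex chip might do that multiply in one elaborate instruction; the RISC philosophy favors many simple, fast, low-power steps instead.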

Recently, Google Cloud introduced its first ARM-based CPU, the Axion chip, which aims to make data centers more energy-efficient while lowering costs. Spotify and Paramount Global are among Google’s first Axion customers; they want to take advantage of its power efficiency for demanding tasks like artificial intelligence. Google has built its own custom Tensor Processing Units (TPUs) for AI tasks since 2015, but Axion is the company’s first general-purpose ARM-based CPU, designed to handle a much wider range of processing applications.

Google reports that the Axion chip is up to 60% more energy-efficient than comparable conventional x86 CPUs from Intel and AMD. With Axion, Google aims to reduce its environmental footprint while delivering reliable cloud services.[4]


6 RISC-V’s RVA23 Standardization

The RISC-V News We’ve Been Waiting For: RVA23

Like ARM, RISC-V is a reduced instruction set chip architecture, but with an open-source instruction set, meaning anyone can design and sell RISC-V chips without paying royalties. However, that same openness, which lets anyone contribute ideas, can lead to fragmentation as different entities develop incompatible implementations of RISC-V. More traditional chip architectures maintain strict standardization, which minimizes incompatibility risks.

RISC-V aims to head off this incompatibility by ratifying the RVA23 profile, a standardized set of instructions intended to make RISC-V competitive with tightly standardized platforms like ARM (from the UK-based company Arm) and x86 (Intel and AMD). By establishing a unified standard, RVA23 helps reduce fragmentation within RISC-V’s open-source ecosystem.

The RVA23 profile enables energy-efficient virtualization (running one or more virtual computers on a physical computer) on RISC-V chips. This is excellent for data centers, where virtualization reduces resource use and energy costs. With RVA23, RISC-V now offers a competitive alternative to x86 and ARM that’s built on open standards.[5]
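Conceptually, a profile is a contract: a named list of extensions that every compliant chip must implement and that software can rely on. Here’s a simplified Python sketch of that idea; the extension mnemonics are real RISC-V names (V = vector, H = hypervisor), but this toy list is not the full ratified RVA23 requirement set, and the two chips are hypothetical.

    # Simplified sketch: a profile as a set of required extensions.
    # This is a toy subset, not the full ratified RVA23 requirement list.
    RVA23_REQUIRED = {"I", "M", "A", "F", "D", "C", "V", "H"}

    def is_compliant(chip_extensions):
        """A chip complies if it implements every required extension."""
        missing = RVA23_REQUIRED - chip_extensions
        return (not missing), missing

    chip_a = {"I", "M", "A", "F", "D", "C", "V", "H", "Zicond"}  # hypothetical chip
    chip_b = {"I", "M", "A", "C"}                                # hypothetical chip

    for name, exts in (("chip_a", chip_a), ("chip_b", chip_b)):
        ok, missing = is_compliant(exts)
        print(name, "compliant" if ok else f"missing {sorted(missing)}")

Software vendors can then target the profile instead of individual chips, which is exactly how fragmentation gets contained.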

5 The Global Race for Chip Independence

The Race to Build a Perfect Computer Chip

In 2024, there’s an intense global arms race going on, but it’s nothing like the physical arms race of the 1947-1991 Cold War. Today, companies and nations are competing to produce the most powerful and most power-efficient computer chips. In fact, the future national security of nations might depend on how well they perform in this competition.

Computer chips are essential components in modern electric vehicles, AI, advanced weapons, and healthcare devices. Any nation that falls behind in chip advancements might find it hard to catch up as the rate of chip design progress accelerates. Nations are determined to develop their own chip design and manufacturing expertise instead of relying on the few countries that currently hold most of the expertise.

For example, the United States has taken measures to curb China’s access to American cutting-edge chip technology, citing security concerns and aiming to limit its rival’s technological rise. China, in response, has invested billions to strengthen its own semiconductor industry and to reduce reliance on foreign imports. Europe and the U.S. are also investing heavily in domestic chip production, aiming to lessen dependence on Taiwan, South Korea, and other major chip-producing countries.

As chips have become the “new oil” of the global economy, countries are investing in localized production to safeguard their interests in data processing, national security, and global technology.[6]


4 AMD and Intel’s Surprising Alliance

Eternal Rivals Become Best Friends, kinda

Imagine an alternate universe where Coca-Cola and Pepsi teamed up to promote cola as a concept. In our universe, this kind of team-up seems highly unlikely. However, longtime chip competitors Intel (Team Blue) and AMD (Team Red) have recently decided to join forces on x86, a computer chip architecture that Intel invented more than 45 years ago. x86 is extremely long in the tooth, and younger rival architectures are threatening to overtake it, making Intel and AMD understandably nervous about the rising competition.

The x86 architecture, released by Intel in 1978, has been the fundamental backbone of computing for decades. However, with mounting competition from ARM-based chips, the two companies have formed an x86 Ecosystem Advisory Group to unify their efforts and keep the x86 architecture relevant.

Intel and AMD hope that this alliance will improve compatibility across Intel and AMD products, making software development more straightforward and applications more reliable for data center operators. Prominent technology giants like Broadcom, Google Cloud, and Oracle have joined the advisory group, recognizing the value of a more standardized x86 platform. By streamlining and unifying their approaches to security and architectural enhancements, AMD and Intel hope to simplify infrastructure management while increasing performance for enterprises using x86 processors.

For data center operators, the collaboration brings practical benefits: easier management of mixed AMD and Intel environments, improved software efficiency, and lower operational costs. By working together, AMD and Intel are trying to future-proof x86 and hopefully hold ARM at bay for a few more decades.[7]
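One concrete reason this compatibility push matters: software routinely checks at runtime which optional features a CPU exposes before picking a code path. The Linux-only Python sketch below reads /proc/cpuinfo to do such a check; the two feature flags tested are just illustrative examples.

    # Minimal Linux-only sketch: why mixed Intel/AMD fleets complicate software.
    # Reads /proc/cpuinfo to see which vendor and optional features a host exposes;
    # the feature names checked here (avx2, sha_ni) are illustrative examples.
    def cpu_summary(path="/proc/cpuinfo"):
        vendor, flags = "unknown", set()
        with open(path) as f:
            for line in f:
                if line.startswith("vendor_id"):
                    vendor = line.split(":", 1)[1].strip()
                elif line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
        return vendor, flags

    vendor, flags = cpu_summary()
    for feature in ("avx2", "sha_ni"):
        print(f"{vendor}: {feature} -> {'yes' if feature in flags else 'no'}")

When Intel and AMD agree on which features ship and how they behave, checks like this, and the fallback code paths behind them, get simpler for everyone.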

3 Cooling Breakthroughs for Advanced Quantum Computing

Quantum Computers Aren’t What You Think — They’re Cooler | Hartmut Neven | TED

Quantum computers need temperatures very close to absolute zero to operate properly. Absolute zero is -459.67°F (-273.15°C), so cold that all molecular motion theoretically stops. Deep space is almost this cold, but on a planet awash in heat and energy, reaching such extreme cold is a monumental engineering challenge.
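The conversion between those temperature scales is simple arithmetic, shown here as a quick Python sanity check.

    # Quick sanity check of the absolute-zero figures quoted above.
    def fahrenheit_to_celsius(f):
        return (f - 32) * 5 / 9

    def celsius_to_kelvin(c):
        return c + 273.15

    abs_zero_f = -459.67
    abs_zero_c = fahrenheit_to_celsius(abs_zero_f)
    print(f"{abs_zero_f}°F = {abs_zero_c:.2f}°C")       # -273.15°C
    print(f"= {celsius_to_kelvin(abs_zero_c):.2f} K")   # 0.00 K, absolute zero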

Reaching these temperatures is essential for quantum computing because it helps to stabilize qubits, the building blocks of quantum information. Traditionally, pulse tube refrigerators (PTRs) have been used to achieve these supercold temperatures, but they are slow, energy-intensive, and extremely expensive. Researchers at the National Institute of Standards and Technology (NIST) have now developed a redesigned PTR that cools to ultra-low temperatures 1.7 to 3.5 times faster than previous models.

The new PTR design includes an adjustable valve that prevents helium, the primary cooling agent, from being wasted as temperatures drop. This improvement makes the cooling process faster and more energy-efficient; deployed widely, the design could save an estimated 27 million watts of power. For quantum computing facilities, setups can now be ready for experiments weeks faster. The NIST team’s breakthrough not only accelerates quantum computing but may also provide cost savings across scientific and industrial applications that rely on cryogenics.[8]
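To get a feel for what a 1.7x to 3.5x speedup means in practice, here’s a small hypothetical calculation; the 120-hour baseline cooldown is an assumed figure for illustration, not a NIST number.

    # Hypothetical illustration of the reported 1.7x-3.5x cooling speedup.
    # The 120-hour baseline cooldown is an assumed figure, not a NIST number.
    baseline_hours = 120  # assumed cooldown time with a conventional PTR

    for speedup in (1.7, 3.5):
        new_hours = baseline_hours / speedup
        saved = baseline_hours - new_hours
        print(f"{speedup}x faster: {new_hours:.0f} h cooldown, {saved:.0f} h saved")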


2 Light: The Key to Faster Computer Chips

The Newest Computer Chips aren’t “Electronic”

Intuitively, it feels like computer chips processing information as light should operate much faster than those relying on electrical signals moving through wires. Traditional computer chips are reaching their speed limits, and researchers are exploring light-based processing as a way to increase speeds by up to 1,000 times using photons instead of electrons.

A team at Julius-Maximilians-Universität Würzburg, in collaboration with the University of Southern Denmark, has developed a new approach to plasmonic resonators, “antennas for light,” which are nanoscale (extremely small) metal structures that enable interactions between light and electrons. Instead of changing the entire structure, the team adjusted only the surface of a gold nanorod resonator, allowing it to respond precisely to light frequencies. This design is similar to the Faraday cage effect, which blocks electric fields by redistributing charges on the surface to protect what’s inside.

This advancement brings us closer to “active plasmonics,” where antennas could function as ultra-fast, light-based switches in computing chips. This technology could aid fields other than computing, such as energy storage and catalysis, where precise control over electron behavior is essential.[9]

1 AlphaChip: AI-Powered Computer Chips Designing Computer Chips

Google’s AlphaChip Can Design AI Chips Now: Did We Hit Matrix-Level?

Google had an idea to test: can AI design computer chips better, and faster, than humans can?

Google is beginning to answer this question with AlphaChip, an AI tool developed by DeepMind to design computer chips. It works by treating the layout of a chip like a puzzle, arranging each component in the most efficient way, and improving with each attempt. This approach allows AlphaChip to create optimized chip layouts faster and more effectively than human designers.

Since its debut in 2020, AlphaChip has redefined chip design, achieving in hours what previously required weeks or even months of human effort. Its efficiency has proven invaluable in the last three generations of Google’s custom Tensor Processing Units (TPUs), the powerful chips driving AI models across data centers worldwide. By efficiently arranging components, AlphaChip reduces wire lengths and optimizes space, boosting chip performance and energy efficiency.
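The flavor of the problem can be sketched in a few lines of Python: place connected components on a grid so total wire length stays small. This brute-force toy is nothing like DeepMind’s reinforcement-learning method; it just shows the objective, and why exhaustive search stops working as chips grow.

    # Toy version of chip placement: assign components to grid slots so that
    # total Manhattan wire length between connected components is minimized.
    # Brute force is feasible for 4 parts but hopeless at real-chip scale,
    # which is exactly why AlphaChip-style learned search is interesting.
    from itertools import permutations

    slots = [(0, 0), (0, 1), (1, 0), (1, 1)]                    # a tiny 2x2 grid
    parts = ["cpu", "cache", "mem", "io"]                       # toy components
    nets = [("cpu", "cache"), ("cpu", "io"), ("cache", "mem")]  # connected pairs

    def wirelength(placement):
        return sum(abs(placement[a][0] - placement[b][0]) +
                   abs(placement[a][1] - placement[b][1]) for a, b in nets)

    best = min((dict(zip(parts, perm)) for perm in permutations(slots)),
               key=wirelength)
    print(best, "total wire length:", wirelength(best))

With thousands of blocks and millions of wires, trying every arrangement is impossible; AlphaChip instead learns a strategy for making good placements quickly.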

Companies like MediaTek appreciate what Google is doing and are following its lead. By accelerating every phase of chip design, from logic synthesis to floorplanning, AlphaChip is trekking into new territory in hardware development and inspiring fresh research across the chip design industry.[10]
