Breaking Data Barriers

HIGHLIGHTS

  • Integrated photonics could revolutionize the data center network by bringing optical I/O directly into the servers and onto the package, enabling low-cost and low-power optical I/O.

  • In the future, machine programming systems could democratize the creation of software, enabling anyone to develop software without writing a single line of code.

  • Through next-gen security for confidential computing, scientists are exploring federated learning to move compute to data silos, and homomorphic encryption to protect data privacy when delegating computation to the cloud.

As a central aspect of its charter, Intel Labs continually engages in the development of next-generation technologies, across the spectrum from fundamental research to applied technology. In particular, this approach involves the development of collaborative research arrangements with academic and industry partners to bring nascent technical capabilities to fruition. This page highlights a few of Intel Labs’ current research areas.

Integrated Photonics

Since 2004, Intel Labs has pioneered research on silicon photonics technology, from architecture to design to manufacturing. By moving data with light, silicon photonics enables faster data transfer over longer distances than traditional electronics. There are now over 4 million Intel 100G transceivers in use by customers for rack-to-rack connectivity throughout the data center.

However, the use of silicon photonics today is limited to the upper layers of the data center network, where optics thrives as a medium. Intel has continued its research in silicon photonics and is looking to apply this technology to overcome power limitations inside the server itself. We call this work integrated photonics because Intel is looking at integrating optical transceivers directly onto the compute package to increase the speed of I/O buses inside the server.

To accelerate this vision, we need to reduce the cost, physical size, and operating power of silicon photonics. This will enable optical I/O to compete with, and ultimately outperform, electrical I/O over shorter-distance interconnects. Intel Labs has developed the following innovations in optical technologies:

  • Integrated multi-wavelength lasers: Using a technique called wavelength division multiplexing, separate wavelengths from the same laser can convey more data in the same beam of light. This enables additional data to be transmitted over a single fiber, increasing bandwidth density (a rough calculation follows this list).
  • Micro-ring modulators: Conventional silicon modulators take up too much area and are costly to place on IC packages. By developing micro-ring modulators, Intel has miniaturized the modulator by a factor of more than 1,000, thereby eliminating a key barrier to integrating silicon photonics onto a compute package.
  • Integrated semiconductor optical amplifiers: As the focus turns to reducing total power consumption, integrated semiconductor optical amplifiers are an indispensable technology, made possible with the same material used by the integrated laser.
  • All-silicon photodetectors: For decades, the industry believed that silicon had virtually no light-detection capability in the 1.3–1.6 µm wavelength range. Intel has showcased research that proves otherwise, and lower cost is one of the main benefits of this breakthrough.
  • Package integration: By tightly integrating silicon photonics and CMOS silicon through advanced packaging techniques, we can gain three benefits: lower power, higher bandwidth, and reduced pin count. Intel is the only company that has demonstrated integrated multi-wavelength lasers and semiconductor optical amplifiers, all-silicon photodetectors, and micro-ring modulators on a single technology platform tightly integrated with CMOS silicon. This research breakthrough paves the path for scaling integrated photonics.
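
To make the bandwidth-density benefit of wavelength division multiplexing concrete, here is a back-of-envelope sketch that simply multiplies an assumed number of wavelengths per fiber by an assumed per-wavelength data rate. The figures are illustrative assumptions, not the specification of any Intel product.

```python
# Illustrative WDM bandwidth arithmetic; lane counts and per-lane rates are assumptions.
def fiber_bandwidth_gbps(num_wavelengths: int, gbps_per_wavelength: float) -> float:
    """Aggregate data rate on one fiber when each wavelength carries its own data lane."""
    return num_wavelengths * gbps_per_wavelength

print(fiber_bandwidth_gbps(1, 32))   # single wavelength: 32 Gb/s
print(fiber_bandwidth_gbps(8, 32))   # eight multiplexed wavelengths: 256 Gb/s on one fiber
```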

Intel Labs’ innovations in integrated photonics advance our vision of bringing optical I/O to servers. This vision opens up new workload possibilities and could revolutionize data center architectures, such as future disaggregated architectures in which functional blocks such as compute, memory, and peripherals are spread throughout the system and interconnected by high-speed optical networks managed in software.

Machine Programming

The field of machine programming (MP), which is the automation of the development of software, is making notable research advances. This is, in part, due to the emergence of a wide range of novel techniques in machine learning.

In today’s technological landscape, software is integrated into almost everything we do – but maintaining software is a time-consuming and error-prone process. When fully realized, machine programming will enable anyone to express their creativity and develop their own software without writing a single line of code.

Intel recognizes the pioneering promise of machine programming, which is why we created the Machine Programming Research (MPR) team in Intel Labs. The MPR team’s goal is a society in which anyone can create software, while machines handle the programming itself.

The field of machine programming is driven by three pillars — intention, invention, and adaptation — which provide the conceptual framework for Intel Labs’ numerous MP research advances. While fully automated machine programming systems may be more than two decades away, the MPR team is advancing research today under the three pillars:

  • Intention: Discover the intent of a programmer or user using a variety of expression techniques.
  • Invention: Create new algorithms and data structures, and lift semantics from existing code.
  • Adaptation: Evolve software in a dynamically changing hardware/software world.

Intention focuses on simplifying the interface between the human and the MP system, finding new ways for humans to express ideas to the machine. The MP system would meet human programmers on their terms, instead of forcing them to express code in computer/hardware notations.
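
As a toy illustration of the intention pillar, the sketch below assumes a user who expresses intent as input/output examples instead of code, and a deliberately tiny synthesizer that searches a handful of candidate programs for one that matches. The examples and candidates are invented for illustration; this is not Intel’s MP system.

```python
# Toy "programming by example": intent is stated as examples, not code.
from typing import Callable, List, Optional, Tuple

# The user's intent, expressed as input/output pairs.
examples: List[Tuple[int, int]] = [(1, 2), (3, 6), (5, 10)]

# A tiny candidate-program space for the synthesizer to search (an assumption).
candidates: List[Tuple[str, Callable[[int], int]]] = [
    ("x + 1", lambda x: x + 1),
    ("x * 2", lambda x: x * 2),
    ("x ** 2", lambda x: x ** 2),
]

def synthesize(examples, candidates) -> Optional[str]:
    """Return the first candidate program consistent with every example."""
    for name, fn in candidates:
        if all(fn(x) == y for x, y in examples):
            return name
    return None

print(synthesize(examples, candidates))  # -> "x * 2"
```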

Invention emphasizes machine systems that create and refine algorithms, or the core hardware and software building blocks from which systems are built. This pillar focuses on higher-order implementation: the data structures and algorithms required to create a particular program.

Adaptation is about fine-tuning a given program to execute under a specific set of constraints, such as specialized hardware or a particular software platform. It focuses on automated tools that help software adapt to changing conditions, such as bugs or vulnerabilities found in an application, or a new hardware system.
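
As a minimal sketch of adaptation, assume that “adapting” simply means measuring several functionally equivalent implementations on the machine at hand and selecting the fastest one. The candidate implementations below are arbitrary stand-ins, not Intel tooling.

```python
# Toy auto-tuner: pick the fastest of several equivalent implementations on this hardware.
import time
import numpy as np

def sum_python(data) -> float:
    return float(sum(data))          # pure-Python reduction

def sum_numpy(data) -> float:
    return float(np.sum(data))       # vectorized reduction

def pick_fastest(variants, data, repeats: int = 5):
    """Time each variant on the current machine and return the quickest."""
    best = (None, None, float("inf"))
    for name, fn in variants:
        start = time.perf_counter()
        for _ in range(repeats):
            fn(data)
        elapsed = time.perf_counter() - start
        if elapsed < best[2]:
            best = (name, fn, elapsed)
    return best[0], best[1]

data = np.random.default_rng(0).random(1_000_000)
name, fn = pick_fastest([("python", sum_python), ("numpy", sum_numpy)], data)
print("selected:", name, "result:", round(fn(data), 2))
```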

In the future, these machine programming systems could democratize the creation of software.

Next-Gen Security

Next-gen security is a critical research area at Intel Labs. Encryption protects data while it is sent across the network and while it is stored, but data can still be vulnerable while it is being processed. Confidential computing is an emerging form of computing that secures data in use.

Today, Intel Labs has contributed one of Intel’s most thoroughly researched and tested trusted execution environments available for data centers: Intel Software Guard Extensions (Intel SGX). Looking toward tomorrow, we are exploring federated learning to break the data silo barrier, and further out, we are working to expand confidential computing through homomorphic encryption.

By bypassing a system’s operating system and virtual machine software layers, a trusted execution environment provides significant additional protection against many attacks. It is a hardware-based security solution that uses encryption to change how memory is accessed, providing enclaves of protected memory in which an application and its data can run.

Trusted execution environments open up new possibilities in areas such as multi-party collaboration while helping to maintain data privacy and regulatory compliance.

In many industries, such as retail, manufacturing, healthcare, and financial services, the largest datasets are locked up in what are called data silos. These data silos may exist to address privacy concerns or regulatory challenges, or in some cases the data is simply too large to move. However, these silos create obstacles when using machine learning tools to gain valuable insights from the data.

Intel Labs is researching new ways to use federated learning to address data silo challenges. For example, a research hospital may have data for several thousand patients with a certain medical condition. But that may not be enough to train a modern algorithm, and privacy regulations may not allow the data to be shared. By using federated learning in a trusted execution environment, many hospitals can contribute their data to train a shared model while maintaining patient privacy from the other institutions.
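
The sketch below shows the basic federated-averaging pattern behind this idea: each simulated institution trains on its own locally held synthetic data and shares only model weights, which a coordinator then averages. It is a minimal, assumed setup (three silos, a logistic-regression model), not the software stack used in the study described next.

```python
# Minimal federated averaging: raw data never leaves a silo, only weights do.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=20):
    """A few gradient steps of logistic regression on one institution's private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # logistic-loss gradient
    return w

# Three silos with synthetic, locally held data.
true_w = np.array([1.5, -2.0, 0.5])
silos = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w > 0).astype(float)
    silos.append((X, y))

# Federated rounds: the coordinator sees only model weights, never the data.
global_w = np.zeros(3)
for _ in range(10):
    local_updates = [local_train(global_w, X, y) for X, y in silos]
    global_w = np.mean(local_updates, axis=0)

print("learned weights:", np.round(global_w, 2))
```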

Intel Labs and the Perelman School of Medicine at the University of Pennsylvania co-developed technology to train artificial intelligence (AI) models to identify brain tumors using the federated learning technique. Researchers trained a brain tumor segmentation model, which predicts tumor locations in MRI scans, using data from 10 different medical institutions, and compared the accuracy of federated learning to centralized learning on the same data. The study showed that federated learning achieved equivalence, with greater than 99% matching accuracy to centralized learning. Institutions also performed 17% better when training in the federation than when training only on their own local data.

Fully homomorphic encryption is an emerging class of cryptosystems that allows applications to perform computation on encrypted data without exposing the data itself. At Intel Labs, the technology is seen as a leading method for protecting data privacy when computation is delegated to the cloud. For example, these cryptographic techniques allow the cloud to compute directly on encrypted data, without the need to trust the cloud infrastructure, the cloud service, or other tenants.

Traditional cryptography requires the cloud server to have access to the secret key to unlock the data for processing. Homomorphic encryption simplifies and secures this process by allowing the cloud to perform computations on ciphertext (the encrypted data) and then return the encrypted results to the owner of the data.
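
To make “computing on ciphertext” concrete, here is a toy of the Paillier cryptosystem, which is additively (not fully) homomorphic: a party holding only ciphertexts can add the underlying values without ever seeing them. The deliberately tiny key makes this insecure and purely illustrative; production homomorphic encryption relies on lattice-based schemes with far larger parameters.

```python
# Toy additively homomorphic encryption (Paillier) with an insecure, tiny key.
import math
import random

def keygen(p: int = 293, q: int = 433):
    # Tiny primes chosen only for illustration; never use keys this small.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because we fix g = n + 1
    return (n,), (lam, mu, n)         # (public key, private key)

def encrypt(pub, m: int) -> int:
    (n,) = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n

def add_encrypted(pub, c1: int, c2: int) -> int:
    (n,) = pub
    return (c1 * c2) % (n * n)        # multiplying ciphertexts adds the plaintexts

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
c_sum = add_encrypted(pub, c1, c2)    # the "cloud" computes without seeing 17 or 25
print(decrypt(priv, c_sum))           # -> 42
```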

However, there are challenges that hinder the adoption of fully homomorphic encryption.

With traditional encryption used to transfer and store data, the overhead is negligible. With fully homomorphic encryption, however, ciphertext is significantly larger than the plain data it encrypts. This data explosion leads in turn to a compute explosion: processing overhead grows not only with the size of the data but also with the complexity of the computations. To overcome these challenges, Intel Labs is currently investigating new hardware and software approaches and engaging with the broader ecosystem and standards bodies.
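
For a rough sense of the data explosion, the sketch below estimates the size of one ciphertext in an RLWE-based scheme as two degree-N polynomials with log2(q)-bit coefficients. The parameter values are illustrative assumptions, not recommendations and not the parameters of any particular library.

```python
# Back-of-envelope ciphertext size for an RLWE-based homomorphic encryption scheme.
def rlwe_ciphertext_kib(poly_degree: int, coeff_modulus_bits: int) -> float:
    """One ciphertext = two polynomials of degree N with log2(q)-bit coefficients."""
    return 2 * poly_degree * coeff_modulus_bits / 8 / 1024

# A few kilobytes of plaintext can balloon to hundreds of kibibytes of ciphertext.
print(f"{rlwe_ciphertext_kib(8192, 218):.0f} KiB per ciphertext")  # ~436 KiB
```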