electrical engineering /ecee/ en Meet the Emmy-winning engineer whose algorithms are behind your Netflix binge /ecee/meet-emmy-winning-engineer-whose-algorithms-are-behind-your-netflix-binge <span>Meet the Emmy-winning engineer whose algorithms are behind your Netflix binge</span> <span><span>Charles Ferrer</span></span> <span><time datetime="2026-04-21T08:32:51-06:00" title="Tuesday, April 21, 2026 - 08:32">Tue, 04/21/2026 - 08:32</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/ecee/sites/default/files/styles/focal_image_wide/public/2026-04/Alan_Bovik_ECEE_Thumbnail.jpg?h=5259405d&amp;itok=D4YBZXz5" width="1200" height="800" alt="Al Bovik Thumbnail"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/ecee/taxonomy/term/52"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/ecee/taxonomy/term/204" hreflang="en">electrical engineering</a> </div> <a href="/ecee/charles-ferrer">Charles Ferrer</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> <div> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/ecee/sites/default/files/styles/medium_750px_50_display_size_/public/2026-04/Alan_Bovik_ECEE_Portraits_20260409_JMP_009.jpg?itok=Ksif89gA" width="750" height="1125" alt="Al Bovik Portrait"> </div> <span class="media-image-caption"> <p><span>Photo Credit: Jesse 
Petersen</span></p> </span> </div> <p dir="ltr"><span>Every time you hit play on a video, chances are you have Al Bovik to thank for its visual quality.</span><br><br><span>Bovik, professor and Provost’s Chair in Engineering in the&nbsp;</span><a href="/ecee/" rel="nofollow"><span>Department of Electrical, Computer and Energy Engineering</span></a><span>, has spent decades developing algorithms that now influence nearly 80% of internet and social media content.</span><br><br><span>At the center is digital visual perception: using the neuroscience of human vision to make streamed video look as sharp and natural as possible. His work is used by familiar brands like Netflix, Amazon and YouTube.</span><br><br><span>Understanding not just how cameras capture patterns of light, Bovik explains, but how the brain interprets them is what drives his research.&nbsp;</span><br><br><span>“The question that really gripped me over time was: can we model mathematically how we see?” Bovik says. “That’s a very different and much harder problem.”</span><br><br><span>His achievements in visual perception processing have landed him two Emmys: a Primetime Emmy Engineering Award for&nbsp;</span><a href="https://www.televisionacademy.com/features/news/awards-news/2015-engineering-emmys-celebrate-technical-achievements" rel="nofollow"><span>Outstanding Achievement in Engineering Development</span></a><span> and a Technology and Engineering Emmy from the Academies of Television Arts and Sciences. 
They also earned him the IEEE Edison Medal, which he shares with Alexander Graham Bell, Nikola Tesla and Ray Dolby.</span><br>&nbsp;<br>We sat down with Bovik to discuss his career, the neuroscience hiding behind your favorite TV or movie and why his proudest achievement isn’t just theories and algorithms.<br>&nbsp;<br><span><strong>For someone outside the field, how would you describe what digital processing is?</strong></span><br><br><span>At its simplest, image processing is about manipulating visual information using computations. Digital processing involves inventing theories and algorithms to help make television and movies more efficient, faster and higher quality. What I do is more than that. It is modeling the visual parts of the brain mathematically, then using those models to create algorithms for better photography, TV shows and movies.</span></p><p dir="ltr"><span><strong>What first drew you toward the field of digital processing?</strong></span><br><br><span>I’m a deeply visual person. Whenever I travel, the first place I go is an art museum. If I go a week without seeing a movie, I go into withdrawal. I’m a visual, spatial thinker and suddenly here was a field that lived at the intersection of mathematics and how we see the world. Then I took an image processing class from Thomas Huang, one of the inventors of image compression, and everything changed overnight.&nbsp;I knew immediately: This is what I want to do. I’ve never looked back.</span><br><br><span><strong>What does the science of human vision reveal about how we see digital content?</strong></span><br><br><span>We know that image processing happens in various brain centers, including the primary visual cortex&nbsp;—&nbsp;the very back of the brain. Vision requires processing an enormous amount of raw information, compressing it into concise, efficient representations that the brain can use to recognize a car on the highway or track a bird in flight. 
We can model that mathematically and start exploring questions like why do we look where we look, or where does your gaze land when you’re driving? The same holds true in videos&nbsp;—&nbsp;your eyes are directed to certain areas when viewing a particular scene.</span><br><br><span><strong>What are the acclaimed algorithms you created, the ones people don’t necessarily notice?</strong></span><br><br><span>We created a variety of algorithms used throughout the streaming and social media industries. These algorithms use mathematical models of how visual distortions are perceived in the human brain and use them to predict how a human will rate the visual quality of a picture or video. For example, they are widely used to control the compression of television and movies streamed worldwide. Compression is necessary since videos are huge and would not be practically streamable otherwise. One of them, called structural similarity (SSIM), allows the big streamers and social media platforms to compress content as much as possible to the point just before noticeable distortions appear. Engineers at companies like Netflix, Meta Platforms, Amazon and YouTube use this technology.</span></p><p dir="ltr"><span><strong>Can you walk us through what’s happening technically when someone presses play on Netflix?</strong></span><br><br><span>Let’s say you’re watching&nbsp;Stranger Things. The moment you start a scene, up in the cloud, approximately 20 different versions of that scene have already been prepared, each compressed a different amount, each perceptually optimized using our algorithms. Some are also spatially downsampled: A 4K video might have versions encoded at 2K or even lower resolution.</span><br><br><span>Your device, whether it’s a phone or a TV, measures the available bandwidth, which changes constantly (especially if you’re on the move in a city with tall buildings), and requests whichever of those 20 versions best fits your current conditions. 
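That per-scene decision can be sketched in a few lines of Python. This is an illustrative toy, not Netflix's actual logic: the rendition names, bitrates, quality scores and the safety margin are all made-up assumptions.

```python
# Hypothetical sketch of per-scene adaptive bitrate selection.
# The "ladder" of pre-encoded versions and its numbers are illustrative.

def pick_rendition(ladder, bandwidth_kbps, safety=0.8):
    """Return the highest-quality rendition whose bitrate fits the
    measured bandwidth, keeping a safety margin for fluctuations."""
    budget = bandwidth_kbps * safety
    feasible = [r for r in ladder if r["bitrate_kbps"] <= budget]
    if not feasible:                       # very poor connection:
        return min(ladder, key=lambda r: r["bitrate_kbps"])
    return max(feasible, key=lambda r: r["quality"])

# A toy ladder of pre-encoded versions of one scene (made-up numbers).
LADDER = [
    {"name": "4K",    "bitrate_kbps": 16000, "quality": 95},
    {"name": "2K",    "bitrate_kbps": 8000,  "quality": 90},
    {"name": "1080p", "bitrate_kbps": 4000,  "quality": 82},
    {"name": "480p",  "bitrate_kbps": 1000,  "quality": 60},
]

# With 12 Mbps measured, the 4K encode exceeds the safety budget,
# so the device requests the 2K version instead.
print(pick_rendition(LADDER, 12000)["name"])
```

A real player repeats this choice for every scene as bandwidth changes, which is the "scene by scene, continuously" behavior described here.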
This happens scene by scene, continuously.</span><br><br><span>Here’s the part that surprises most people: You might think you’re watching 4K, but if your bandwidth is constrained, you might actually be receiving a heavily compressed 2K version that’s been decompressed and upsampled back to 4K on your TV. Visually, you can’t tell the difference because of our video quality algorithms.&nbsp;</span></p><p dir="ltr"><span><strong>Your algorithms also help determine how much video can be compressed before viewers notice a difference. How does that work?&nbsp;</strong></span><br><br><span>Another algorithm we developed, called visual information fidelity, or VIF, predicts how a person will perceive the quality of a video after it has been compressed. It tells the Netflix video quality system the point where distortions may be visible. Netflix’s video streaming is built on these neuroscience principles and sometimes I say that they have now become a visual neuroscience company.</span></p> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/ecee/sites/default/files/styles/medium_750px_50_display_size_/public/2026-04/_MG_0916.JPG?itok=6Ivt6JNr" width="750" height="500" alt="Al Bovik 2025 Emmy"> </div> <span class="media-image-caption"> <p><span>Professor Al Bovik and two former PhD students at the 2015 Primetime Emmy Engineering Awards ceremony.&nbsp;</span></p> </span> </div> <p dir="ltr"><span><strong>How did your first successful model, structural similarity, come about?&nbsp;&nbsp;</strong></span><br><br><span>Almost by accident, honestly. My students and I were working on video compression, and we ran into a fundamental problem: How do you even measure whether your results are good? How does a human perceive the quality of a picture? Nobody had really solved that, and most researchers thought it was unsolvable. So we built our own model. 
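The structural similarity (SSIM) idea mentioned earlier compares two images by their luminance, contrast and structure. Below is a minimal single-window sketch in Python; the published metric computes this over many local windows and averages, and the sample pixel values here are made up. The constants follow the common choice C1=(0.01·L)² and C2=(0.03·L)² for dynamic range L=255.

```python
# Minimal single-window sketch of the SSIM idea (illustrative only).

def ssim(x, y, L=255):
    """Structural similarity between two equal-length pixel lists."""
    n = len(x)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = sum(x) / n, sum(y) / n                     # mean luminance
    vx = sum((a - mx) ** 2 for a in x) / n              # variance (contrast)
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # structure
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = [52, 55, 61, 66, 70, 61, 64, 73]       # made-up "pristine" pixels
identical = ssim(ref, ref)                   # identical inputs -> 1.0
shifted = ssim(ref, [p + 20 for p in ref])   # brightness shift -> below 1.0
```

The score is 1.0 only for identical signals, and distortions such as a brightness shift pull it below 1.0, which is what lets an encoder compress right up to the point where a viewer would notice.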
We were amazed when the entire television industry noticed and adopted it. The streaming world discovered it early while they were wrestling with the question of how much to compress video before it starts looking distorted to a viewer. This was especially important since the new wireless/smartphone systems had very limited bandwidth. SSIM gave them a way to find that compression point and deliver perceptually compressed videos to everyone. Every photo uploaded to Facebook, Instagram, WhatsApp or Reels is now optimized using a model rooted in visual neuroscience. We had successfully introduced the principles of visual neuroscience throughout the internet.&nbsp;</span><br><br><span><strong>You’ve also worked with Meta for nearly a decade on virtual and augmented reality. What does that world look like?</strong></span><br><br><span>It’s one of the most exciting problems I’ve worked on. Imagine wearing advanced AR glasses here in Colorado, while your colleague is wearing a similar pair in Paris. You can see each other in 3D, in real time, as if you’re in the same room. The challenge is that the display is an inch from your eye, so you need a far denser resolution, perhaps 8K or 16K, which means vastly more data to compress and transmit. Our approach is the avatar model: rather than sending a live 3D video feed, you build a photo-realistic 3D model of the person, which is stored on your friend’s AR glasses, and only transmit their facial movements, determined by cameras and image processing in your own glasses, which requires far less bandwidth. The 3D avatar is animated in real time on the receiving end.</span><br><br><span><strong>What are you most proud of from your teaching career and from partnering with some of the largest digital giants?&nbsp;</strong></span><br><br><span>I ask myself, “Am I giving my students the best possible opportunities?” My students are not just programmers, and they’re not just video engineers. 
They’re also trained as visual psychologists and neuroscientists. The thing I’m most proud of is the successes of my students. The Netflix video team is largely composed of students from our&nbsp;</span><a href="/lab/live/" rel="nofollow"><span>Laboratory for Image and Video Engineering (LIVE)</span></a><span>. What matters most to me are the people who came through this lab and went on to shape an industry. No fewer than six of my students have Emmy statuettes on their shelves at work or home. If I were to ask myself why I’m here at CU Boulder, that’s the answer, along with living in the Colorado mountains!</span><br><br><span><strong>What’s an aspect that people may not realize about your work in image processing?&nbsp;</strong></span><br><br><span>The internet now accounts for nearly 10% of global carbon emissions, and that’s growing fast. Our algorithms help reduce internet video data volume, which is 80% of internet traffic, by nearly 25%. By reducing the amount of data moving through global networks, we are shaving off a meaningful fraction of that footprint, and the ecological impact is real.</span><br><br><span><strong>Burning question: Do you have a favorite movie or show that has used your algorithms?&nbsp;</strong></span><br><br><span>Pretty much any movie or TV show I watch will be processed by these algorithms. These would include British mystery shows like Broadchurch, Grace and Prime Suspect, which my wife and I watch all the time, and movies with great acting, cinematography and directing, like&nbsp;The Godfather,&nbsp;2001: A Space Odyssey,&nbsp;Blade Runner, Spartacus, Gladiator and many more. This year, I especially liked&nbsp;Sinners and&nbsp;One Battle After Another.</span></p></div> </div> </div> </div> </div> <div>Two-time Emmy‑winning electrical engineer Al Bovik shares how his algorithms shape the visual quality of nearly 80% of streamed video worldwide. 
By combining neuroscience with engineering, his work impacts some of the largest digital platforms behind your TV or movie binge. </div> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Tue, 21 Apr 2026 14:32:51 +0000 Charles Ferrer 2834 at /ecee CU Boulder to host International Workshop on Biodesign Automation this June /ecee/cu-boulder-host-international-workshop-biodesign-automation-june <span>CU Boulder to host International Workshop on Biodesign Automation this June</span> <span><span>Charles Ferrer</span></span> <span><time datetime="2026-04-08T09:55:56-06:00" title="Wednesday, April 8, 2026 - 09:55">Wed, 04/08/2026 - 09:55</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/ecee/sites/default/files/styles/focal_image_wide/public/2026-04/synthetic%20biology.jpg?h=287a424d&amp;itok=Pxh_XKNA" width="1200" height="800" alt="synthetic biology"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/ecee/taxonomy/term/52"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/ecee/taxonomy/term/164" hreflang="en">biomedical</a> <a href="/ecee/taxonomy/term/155" hreflang="en">computer engineering</a> <a href="/ecee/taxonomy/term/204" hreflang="en">electrical engineering</a> </div> <a href="/ecee/charles-ferrer">Charles Ferrer</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div 
class="ucb-article-text" itemprop="articleBody"> <div><p dir="ltr"><span>The University of Colorado Boulder will host the 18th annual&nbsp;</span><a href="https://www.iwbdaconf.org/" rel="nofollow"><span>International Workshop on Biodesign Automation&nbsp;</span></a><span>(IWBDA) on June 18-20. IWBDA will be held immediately following the Synthetic Biology: Engineering, Evolution &amp; Design (SEED) Conference, which takes place in Denver from June 15-18.</span></p> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/ecee/sites/default/files/styles/medium_750px_50_display_size_/public/2025-02/ECEE_SPUR_Synthetic_Biology_Lab_2024_00002.JPG?itok=pxuvXD0C" width="750" height="500" alt="Synthetic Biology lab long"> </div> <span class="media-image-caption"> <p><em>Graduate and undergraduate students at a synthetic biology outreach event led by the Genetic Logic Lab at CU Boulder.&nbsp;</em></p> </span> </div> <p dir="ltr"><span>“Hosting IWBDA is a great opportunity for our faculty and students to engage with world-class researchers and industry leaders in the emerging field of synthetic biology,” said </span><a href="/ecee/chris-myers" rel="nofollow"><span>Chris Myers</span></a><span>, department chair of electrical, computer and energy engineering. “We look forward to forming new collaborations that will move this exciting field forward.”</span></p><p dir="ltr"><span>Synthetic biology involves redesigning organisms for useful purposes by engineering them to have new abilities. Scientists around the world are harnessing synthetic biology to solve pressing problems in medicine, manufacturing and agriculture.</span><br><br><span>For example, microorganisms can be engineered to clean pollutants from water, soil and air, work that is essential in the fight against environmental contamination. 
In agriculture, scientists have modified rice to produce beta-carotene, a nutrient typically associated with carrots, helping to prevent vitamin A deficiency in populations that rely heavily on rice as a dietary staple.</span><br><br><span>However, synthetic biology faces a significant challenge: the field has lagged behind other industries in adopting computational and digital solutions. Unlike in software engineering, where standardized tools and workflows are common, biological systems are highly complex and variable. A solution that works for one organism or process often must be completely redesigned for another.</span><br><br><span>This is where biodesign automation (BDA) comes in. BDA applies the principles of engineering and computer science to streamline and accelerate biological research and development.&nbsp;</span><br><br><span>By developing innovative software tools, standardized components and automated workflows, researchers aim to make synthetic biology faster, more reproducible and more accessible.&nbsp;</span><br><br><span>IWBDA pushes this mission forward by bringing the synthetic biology, systems biology and design automation communities together for stronger collaboration.&nbsp;</span><br><br><span><strong>What to expect at the SEED conference</strong></span><br><br><a href="https://synbioconference.org/2026" rel="nofollow"><span>Synthetic Biology: Engineering, Evolution &amp; Design</span></a><span> (SEED) is the premier technical conference for the synthetic biology community, serving as a global venue to share transformative breakthroughs across academia and industry.&nbsp;</span></p><div class="feature-layout-callout feature-layout-callout-medium"><div class="ucb-callout-content"><p><i class="fa-solid fa-microscope fa-2x">&nbsp;</i>&nbsp;<strong>Attending SEED 2026</strong><br><br><span><strong>Who: </strong>Open to the public</span><br><span><strong>When:</strong> Monday, June 15 to Thursday, June 18</span><br><span><strong>Where: 
</strong>Hyatt Regency Denver at Colorado Convention Center</span><br><span><strong>Registration: </strong></span><a href="https://synbioconference.org/2026" rel="nofollow"><span>Required</span></a></p></div></div><p dir="ltr"><span>The conference highlights how advances such as artificial intelligence and biological engineering are accelerating the field faster than ever.</span><br><br><span>Covering synthetic biology from its scientific foundations to its commercial applications, SEED offers attendees insight into development strategies from leaders in research, biomanufacturing and product innovation.&nbsp;</span><br><br><span>Whether participants focus on research and development, commercialization or bringing discoveries into real-world impact, SEED provides significant networking opportunities for those engaged in the synthetic biology community. &nbsp;</span><br><br><span>By attending both SEED and IWBDA, participants can take part in technical workshops and hands-on design automation sessions, whether they come from research, academia or industry.</span><br><br><span><strong>Get the scoop about IWBDA 2026</strong></span><br><br><span>IWBDA aims to bring academic researchers and industry partners together to move the field of biodesign automation for synthetic biology forward.</span></p><div class="feature-layout-callout feature-layout-callout-medium"><div class="ucb-callout-content"><p><i class="fa-solid fa-flask-vial fa-2x">&nbsp;</i>&nbsp;<strong>Attending IWBDA 2026</strong><br><span><strong>Who: </strong>Researchers, faculty, students, industry</span><br><span><strong>When: </strong>Thursday, June 18 to Saturday, June 20</span><br><span><strong>Where: </strong>KOBL 352 / ECCS 201</span><br><span><strong>Registration: </strong></span><a href="https://www.iwbdaconf.org/" rel="nofollow"><span>Required</span></a></p></div></div><p dir="ltr"><span>This year’s&nbsp;</span><a 
href="https://www.iwbdaconf.org/" rel="nofollow"><span>IWBDA workshop</span></a><span>, led by the&nbsp;</span><a href="/ecee/" rel="nofollow"><span>Department of Electrical, Computer &amp; Energy Engineering (ECEE)</span></a><span>, takes place in Boulder, less than 40 miles from Denver, immediately following the SEED conference, making the two events a natural pairing for attendees traveling to Colorado for the week.</span><br><br><span>IWBDA will include presentations and poster talks selected from submitted abstracts, Birds of a Feather discussions and interactive breakout sessions.&nbsp;</span></p><p dir="ltr"><span>Topics will span artificial intelligence and machine learning in synthetic biology, biosecurity considerations in lab automation, the growing role of biofoundries, computer-aided design tools and synthetic biology education and outreach.</span><br><br><span>Keynote speakers include Dr.&nbsp;</span><a href="https://meche.mit.edu/people/faculty/ddv@MIT.EDU" rel="nofollow"><span>Domitilla Del Vecchio</span></a><span> of MIT and Dr.&nbsp;</span><a href="https://bme.duke.edu/people/emma-chory/" rel="nofollow"><span>Emma J. 
Chory</span></a><span> of Duke University, both prominent researchers in the intersection of engineering and biological sciences.</span><br><br><span><strong>Hands-on tutorials</strong></span></p><div class="feature-layout-callout feature-layout-callout-medium"><div class="ucb-callout-content"><p><i class="fa-solid fa-keyboard fa-2x">&nbsp;</i>&nbsp;<strong>Attending IWBDA Tutorials</strong><br><br><span><strong>Who: </strong>Researchers, faculty, students, industry&nbsp;</span><br><span><strong>When:</strong> Saturday, June 13 to Sunday, June 14</span><br><span><strong>Where: </strong>KOBL 352&nbsp;</span><br><span><strong>Registration:</strong> Required (</span><a href="https://www.iwbdaconf.org" rel="nofollow"><span>fee can be waived for CU students</span></a>)</p></div></div><p dir="ltr"><span>For those who want to dive deeper before the main workshop, IWBDA tutorials will be held June 13-14 in Boulder.&nbsp;</span><br><br><span>These two days hands-on sessions are designed to give faculty, researchers, industry members and students practical experience with synthetic biology software tools and to close the gap between tool developers and experimental biologists.</span><br><br><span>Parallel tracks will be offered for both users and developers, allowing attendees to tailor their experience to their skill level and interests.</span><br><br><span>The user track will guide participants through a complete synthetic biology workflow using open-source tools, while the developer track will introduce libraries and resources for building standard-enabled synthetic biology software.&nbsp;</span></p></div> </div> </div> </div> </div> <div>91ý will host the 18th International Workshop on Biodesign Automation (IWBDA), June 18–20, following the SEED Conference in Denver. 
The workshop brings together researchers and industry leaders advancing biodesign automation in synthetic biology.<br> </div> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/ecee/sites/default/files/styles/large_image_style/public/2026-04/synthetic%20biology.jpg?itok=vdwaSnwJ" width="1500" height="540" alt="synthetic biology"> </div> </div> <div>On</div> <div>White</div> Wed, 08 Apr 2026 15:55:56 +0000 Charles Ferrer 2821 at /ecee Scientists harness AI to reveal forces behind glacier surges /ecee/scientists-harness-AI-reveal-forces-behind-glacier-surges <span>Scientists harness AI to reveal forces behind glacier surges</span> <span><span>Charles Ferrer</span></span> <span><time datetime="2026-03-05T15:12:42-07:00" title="Thursday, March 5, 2026 - 15:12">Thu, 03/05/2026 - 15:12</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/ecee/sites/default/files/styles/focal_image_wide/public/2026-02/Negribreen%20surge%202017.JPG?h=258ff3ec&amp;itok=wSWcX9hh" width="1200" height="800" alt="Negribreen glacier surge 2017"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/ecee/taxonomy/term/52"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/ecee/taxonomy/term/238" hreflang="en">AI</a> <a href="/ecee/taxonomy/term/38" hreflang="en">Research</a> <a href="/ecee/taxonomy/term/204" hreflang="en">electrical engineering</a> </div> <a 
href="/ecee/charles-ferrer">Charles Ferrer</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> <div> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/ecee/sites/default/files/styles/medium_750px_50_display_size_/public/2026-03/Negribreen%20Glacier%20System%20Airborne%20Geophysical%20Campaign_0.JPG?itok=8ujaDPlX" width="750" height="491" alt="Negribreen 2019 campaign"> </div> <span class="media-image-caption"> <p>Ute Herzfeld (PI), Harald Sandal (pilot), Gustav Svanstroem (helicopter technician) and Matthew Lawson (research assistant) during the&nbsp;Negribreen Glacier System Airborne Geophysical campaign (Photo Credit: Thomas Trantow).&nbsp;<br>&nbsp;</p> </span> </div> <p dir="ltr"><span>Glaciers are constantly changing and reshaping the Earth’s surface.&nbsp;</span><br><br><span>CU Boulder researchers have developed a new machine learning tool to better understand how Arctic glaciers suddenly accelerate or “surge.”</span><br><br><span>The team, led by&nbsp;</span><a href="/ecee/ute-herzfeld" rel="nofollow"><span>Ute Herzfeld</span></a><span>, a research professor in the Department of Electrical, Computer and Energy Engineering,&nbsp;created an open-source cyberinfrastructure called GEOCLASS-image, designed to decode the physical processes behind glacier motion using high-resolution satellite imagery and machine learning.&nbsp;</span><br><br><span>Glacier surges are sudden bursts of movement in otherwise slow-flowing ice.&nbsp;</span><br><br><span>Normally, glaciers move at a steady pace, but during a rare “surge,” that movement can speed up to 200 times its usual rate. The ice fractures into deep crevasses and pushes large volumes of ice toward the ocean. 
These dramatic events provide scientists with new insight into the unpredictable drivers of sea-level rise. &nbsp;</span><br><br><span>“Most deep machine learning systems don’t know what to look for in images,” said Herzfeld, who is also the director of the Geomathematics, Remote Sensing and Cryospheric Sciences Laboratory. “We have built a system that understands the physics of ice deformation, so the classifications actually mean something.”</span><br><br><span><strong>Understanding how a glacier surges</strong></span></p><p dir="ltr"><span>Because traditional artificial intelligence systems often struggle to interpret complex natural phenomena, the team created a new neural network approach—VarioCNN—to better understand glacial acceleration.</span><br><br><span>“Surging glaciers are one of the deep uncertainties in sea-level rise projections,” Herzfeld said. “They can move much faster than normal and current earth system models do not yet have the ability to account for them.”</span><br><br><span>To tackle this problem, Herzfeld and her team merged two powerful approaches: a deep convolutional neural network (CNN), common in the field of computer science and remote sensing, and a physics-informed neural network model that captures how crevasses in the ice form, widen and intersect during motion.&nbsp;</span><br><br><span>“Think of neural networks as Lego blocks,” Herzfeld said. 
“We’ve taken some from physically informed models, some from deep learning and built a new kind of AI that’s meaningful.”</span><br><br><span><strong>Putting AI to the test&nbsp;</strong></span><br><br><span>The team tested their approach on a real-world event: the unexpected 2016 surge of Negribreen, a glacier located in the Arctic archipelago of Svalbard, about 1,000 km from the North Pole.&nbsp;</span></p><div class="feature-layout-callout feature-layout-callout-medium"><div class="ucb-callout-content"><p class="text-align-right"><i class="fa-solid fa-quote-left">&nbsp;</i>This isn’t just another AI model but one that understands the physics of glacial acceleration.<i class="fa-solid fa-quote-right">&nbsp;</i><br>~Ute Herzfeld</p></div></div><p dir="ltr"><span>Using Maxar WorldView satellite imagery collected in 2016-2018, the researchers tracked subtle changes across the glacier’s surface with remarkable detail.</span><br><br><span>They discovered that crevasse patterns, which change dramatically during a surge, hold information about surge dynamics that can be retrieved using their neural network approach.&nbsp;&nbsp;</span><br><br><span>One-dimensional crevasses appeared at the leading edge of the surge, while deeper within the surge area, complex patterns told the story of the transformation and deformation of the ice, which can be of use in numerical modeling of the glacial acceleration.&nbsp;</span><br><br><span>Shear, a type of deformation that plays a key role in glacial acceleration, is easily misclassified by standard deep learning but correctly identified by VarioCNN.</span><br><br><span>With their new VarioCNN model, they classified different types of crevasses from satellite images and used those patterns to interpret how the glacier moved and changed.</span><br><br><span>Results of the classification were then used to understand how the surge expanded and affected the entire Negribreen glacier system. 
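The "Vario" in VarioCNN points to variogram-style spatial statistics, a standard geostatistics tool for characterizing how strongly a surface varies between points a given distance apart; this is a simplified gloss with made-up elevation numbers, not the paper's actual feature pipeline.

```python
# Hedged sketch: a classical empirical variogram, the kind of spatial
# statistic that distinguishes smooth ice from crevassed ice.

def variogram(z, max_lag):
    """Empirical variogram: v(h) = mean squared difference between
    samples separated by lag h, for h = 1..max_lag."""
    v = {}
    for h in range(1, max_lag + 1):
        diffs = [(z[i + h] - z[i]) ** 2 for i in range(len(z) - h)]
        v[h] = sum(diffs) / len(diffs)
    return v

# Toy elevation transects: a smooth slope vs. a heavily crevassed profile.
smooth    = [0, 1, 2, 3, 4, 5, 6, 7]
crevassed = [0, 6, 0, 7, 1, 6, 0, 7]

# Crevassed terrain varies far more over short distances,
# so its short-lag variogram value is much larger.
print(variogram(smooth, 2), variogram(crevassed, 2))
```

Statistics like these, computed over image patches, give a classifier physically meaningful inputs rather than raw pixels, which is the spirit of combining variography with a CNN.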
Ultimately, ice mass equivalent to 1% of global annual sea-level rise was transferred to the ocean.</span><br><br><span>Published in&nbsp;</span><a href="https://www.mdpi.com/2072-4292/16/11/1854" rel="nofollow"><span>Remote Sensing</span></a><span>, their results demonstrated how integrating physical knowledge into a neural network model, carried out at the computational level, can advance machine learning and glaciological understanding of glacier surges. The paper was selected as the cover story of Remote Sensing, receiving record downloads in the first two weeks after publication.</span></p> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/ecee/sites/default/files/styles/medium_750px_50_display_size_/public/2026-02/Negribreen_0.JPG?itok=vpiLm5YF" width="750" height="497" alt="Negribreen 2017"> </div> <span class="media-image-caption"> <p><span>Student Connor Meyers setting up a GPS station at the edge of Negribreen (Photo Credit: Ute Herzfeld).&nbsp;</span></p> </span> </div> <p dir="ltr"><span>“The problem of task-oriented machine learning is especially intriguing to me,” said Silas Twickler (Phys’25), who was a research assistant on the project. “While simply applying pre-existing neural networks may be sufficient for certain applications, the augmentation of these networks can allow for a drastic improvement in machine learning.”</span></p><p dir="ltr"><span><strong>AI for the geosciences&nbsp;</strong></span><br><br><span>A major hurdle in applying machine learning to studying glaciers is the limited amount of labeled data.&nbsp;To overcome this, Herzfeld’s team developed a way that allows scientists to gradually refine the model using a relatively small number of hand-labeled satellite images.&nbsp;</span><br><br><span>VarioCNN was trained on just a few thousand examples, far fewer than the 100,000 images that typical deep learning models require. 
Due to its modular design, the GEOCLASS cyberinfrastructure can be adapted to study other geophysical processes and potentially surfaces of other planets.</span><br><br><span>“Our tool is not just for glaciologists, but for anyone working with remote sensing and physical systems,” Herzfeld said. “Ultimately, we hope to give scientists better tools to understand how the Earth is changing.”&nbsp;</span><br><br><em><span>This research was funded by the National Science Foundation Office of Advanced Cyberinfrastructure and NASA Earth Sciences Division.</span></em></p></div> </div> </div> </div> </div> <div>Glaciers are constantly changing and reshaping the Earth’s surface.&nbsp;CU Boulder researchers have developed a new machine learning tool to better understand how Arctic glaciers suddenly accelerate or “surge.” </div> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/ecee/sites/default/files/styles/large_image_style/public/2026-02/Negribreen%20surge%202017.JPG?itok=9uU4WNVN" width="1500" height="504" alt="Negribreen glacier surge 2017"> </div> </div> <div>On</div> <div>White</div> <div>Negribreen glacier during an ice surge in 2017 (Credit: Ute Herzfeld).</div> Thu, 05 Mar 2026 22:12:42 +0000 Charles Ferrer 2813 at /ecee Researchers build ultra-efficient optical sensors shrinking light to a chip /ecee/researchers-build-ultra-efficient-optical-sensors-shrinking-light-chip <span>Researchers build ultra-efficient optical sensors shrinking light to a chip</span> <span><span>Charles Ferrer</span></span> <span><time datetime="2026-02-23T09:37:42-07:00" title="Monday, February 23, 2026 - 09:37">Mon, 02/23/2026 - 09:37</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" 
src="/ecee/sites/default/files/styles/focal_image_wide/public/2026-02/Bright%20Lu%20headshot_0.jpeg?h=bde246bb&amp;itok=bcWVALQ3" width="1200" height="800" alt="Bright Lu headshot"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/ecee/taxonomy/term/52"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/ecee/taxonomy/term/18" hreflang="en">Graduate Students</a> <a href="/ecee/taxonomy/term/203" hreflang="en">Photonics</a> <a href="/ecee/taxonomy/term/38" hreflang="en">Research</a> <a href="/ecee/taxonomy/term/204" hreflang="en">electrical engineering</a> </div> <a href="/ecee/charles-ferrer">Charles Ferrer</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> <div> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/ecee/sites/default/files/styles/medium_750px_50_display_size_/public/2026-02/Bright%20Lu%20headshot_0.jpeg?itok=x_aOiHiW" width="750" height="869" alt="Bright Lu headshot"> </div> <span class="media-image-caption"> <p><span>Lu at the new electron beam lithography system used to develop microresonators at COSINC.&nbsp;</span></p> </span> </div> <p>CU Boulder researchers have built high-performing optical microresonators, opening the door for new sensor technologies.<br><br>In its simplest form, a microresonator is a tiny device that can trap light and build up its intensity.<br><br>Once the 
intensity is high enough, researchers can perform unique light operations.&nbsp;<br><br>“Our work is about using less optical power with these resonators for future uses,” said Bright Lu, a fourth-year doctoral student in electrical and computer engineering and a lead author on the study. “One day these microresonators can be adapted for a wide range of sensors from navigation to identifying chemicals.”<br><br>For this endeavor, published in <a href="https://pubs.aip.org/aip/apl/article/128/8/081103/3380880/Ultrahigh-Q-chalcogenide-micro-racetrack" rel="nofollow">Applied Physics Letters</a>, the team focused on ‘racetrack’ resonators, named for their elongated shape that resembles a running track.&nbsp;<br><br>Specifically, the researchers used ‘Euler curves’ — a type of smooth curve also found in road and railway design. Just as cars can’t make sharp right-angle turns while in motion, light cannot be forced into abrupt bends.<br><br>“These racetrack curves minimize bending loss,” said <a href="/ecee/wounjhang-won-park" rel="nofollow">Won Park</a>, Sheppard Professor of Electrical Engineering and a co-advisor on the study. 
“Our design choice was a key innovation of this project.”<br><br>By guiding light smoothly through the resonator, they dramatically reduced light loss, allowing photons to circulate longer and interact more strongly inside the device.<br><br>If too much light is lost, Lu says, the device cannot reach the high intensities these microresonators need to perform as required.&nbsp;<br><br><strong>Made in Colorado&nbsp;</strong></p><p>Incredibly small, the microresonators were built using the <a href="/facility/cosinc/" rel="nofollow">Colorado Shared Instrumentation in Nanofabrication and Characterization (COSINC)</a> clean room’s new electron beam lithography system.<br><br>The facility provides the highly controlled environment required to work at the microscopic scales needed for reliable device performance.&nbsp;</p> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/ecee/sites/default/files/styles/medium_750px_50_display_size_/public/2026-02/Microresonator.jpg?itok=fBx8wS9l" width="750" height="307" alt="microresonator"> </div> <span class="media-image-caption"> <p><span>Optical waveguide microresonators on a chip created in this effort; they are ten times thinner than a human hair.&nbsp;</span></p> </span> </div> <p>Many optical and photonic devices are smaller than the width of a piece of paper, meaning even tiny dust particles or surface imperfections can disrupt how light travels through a material.&nbsp;<br><br>“Traditional lithography uses photons and is fundamentally limited by the wavelength of light,” Lu said. “However, electron beam lithography has no such constraint. 
With electrons, we can realize our structures with sub-nanometer resolution, which is critical for our microresonators.”<br><br>For Lu, the hands-on fabrication process was a fulfilling aspect of the project.&nbsp;<br><br>“Clean rooms are just cool and you’re working with these massive, precise machines and then you get to see images of structures you made only microns wide. Turning a thin film of glass into a working optical circuit is really satisfying.”<br><br>A key success of the work was the researchers’ ability to use chalcogenides, a broad term encompassing a family of specialized semiconductor glasses.<br><br>“These chalcogenides are excellent materials for photonics because of their high transparency and nonlinearity,” said Park. “Our work represents one of the best performing devices using chalcogenides, if not the best.”<br><br>Chalcogenides are useful because their strong transparency lets light pass through the device at the high intensities microresonators require.&nbsp;<br><br>However, the materials are not easy to process, so there’s a balancing act.&nbsp;<br><br>“Chalcogenides are difficult, but rewarding materials to operate for photonic nonlinear devices,” said <a href="/faculty/juliet-gopinath/" rel="nofollow">Professor Juliet Gopinath</a>, who has worked on this project with Park for more than ten years. 
“Our results showed that minimizing the bend loss enables ultra-low loss devices comparable to state-of-the-art in other materials platforms.”<br><br><strong>Measuring light at the microscale</strong></p> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/ecee/sites/default/files/styles/medium_750px_50_display_size_/public/2026-02/James%20Erickson%20headshot.jpg?itok=t8aYDtqm" width="750" height="448" alt="James Erickson headshot"> </div> <span class="media-image-caption"> <p><span>Erickson with the optical setup for capturing data measuring absorption and thermal effects.</span></p> </span> </div> <p>Once fabricated, the microresonators were handed off for testing, work led by James Erickson, a physics PhD student specializing in laser-based measurements. He carefully aligned lasers with the microscopic waveguides, coupling light into and out of the device while monitoring how it behaved inside.</p><p>They looked for ‘dips’ in the transmitted light that indicate resonance as photons become trapped. By analyzing the shape of those dips, they were able to extract properties like absorption and thermal effects.<br><br>“The most obvious indicator of device quality is the shape of the resonances and we want them to be deep and narrow, like a needle piercing through the signal background,” said Erickson. “We’ve been chasing this kind of resonator for a long time, and when we saw the sharp resonances on this new device we knew right away that we’d finally cracked the code.”<br><br>Erickson added that to make a good device, you need to know how much light will be absorbed versus transmitted. 
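The dip analysis described here is commonly done by fitting each resonance with a Lorentzian line shape: the dip's full width at half maximum (FWHM) relative to the optical carrier frequency gives the loaded quality factor Q = f0 / FWHM, and the deep, narrow dips Erickson describes correspond to high Q. A minimal sketch of such a fit (illustrative only, not the team's analysis code; the numbers are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(detuning, center, fwhm, depth):
    """Transmission near one resonance: a Lorentzian dip on a unit background."""
    return 1.0 - depth / (1.0 + (2.0 * (detuning - center) / fwhm) ** 2)

def fit_q(detuning_hz, transmission, carrier_hz):
    """Fit the dip shape and return the loaded quality factor Q = carrier / FWHM."""
    p0 = [detuning_hz[np.argmin(transmission)],       # center: deepest point
          (detuning_hz[-1] - detuning_hz[0]) / 10.0,  # rough linewidth guess
          1.0 - transmission.min()]                   # dip depth guess
    popt, _ = curve_fit(lorentzian_dip, detuning_hz, transmission, p0=p0)
    return carrier_hz / abs(popt[1])

# Synthetic laser sweep: a 2 MHz linewidth dip at a ~193.4 THz optical carrier.
detuning = np.linspace(-20e6, 20e6, 2001)
trans = lorentzian_dip(detuning, 0.0, 2e6, 0.8)
q = fit_q(detuning, trans, carrier_hz=193.4e12)
```

Fitting in detuning (frequency offset) rather than absolute frequency keeps the optimizer numerically well conditioned; real data would add noise, a sloped background, and possibly a split-resonance model.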
Thermal effects become important as laser power is added, since you run the risk of damaging the device.&nbsp;<br><br>“The way most materials interact with light also changes depending on the temperature of the material,” said Erickson, “so as a device heats up its properties can change and cause it to work differently.”<br><br>In the future, the microresonators could be used for compact microlasers, advanced chemical and biological sensors, and even tools for quantum metrology and networking.<br><br>“Many photonic components from lasers, modulators and detectors are being developed and microresonators like ours will help tie all of those pieces together,” said Lu. “Eventually, the goal is to build something you could hand to a manufacturer and create hundreds of thousands of them.”</p></div> </div> </div> </div> </div> <div>CU Boulder researchers have built high-performing optical microresonators, opening the door for new sensor technologies. In the future, the microresonators could be used for compact microlasers, advanced chemical and biological sensors, and even tools for quantum metrology and networking.</div> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/ecee/sites/default/files/styles/large_image_style/public/2026-02/COSINC_Cleanroom_0.jpg?itok=Z8sx_rrO" width="1500" height="814" alt="COSINC Cleanroom"> </div> </div> <div>On</div> <div>White</div> <div>The fabrication cleanroom facility provides state-of-the-art instrumentation, including lithography, thin-film deposition and more. 
(Credit: COSINC)</div> Mon, 23 Feb 2026 16:37:42 +0000 Charles Ferrer 2809 at /ecee An earthquake on a chip: New tech generates tiny waves, could make smartphones smaller, faster /ecee/2026/01/14/earthquake-chip-new-tech-generates-tiny-waves-could-make-smartphones-smaller-faster <span>An earthquake on a chip: New tech generates tiny waves, could make smartphones smaller, faster</span> <span><span>Charles Ferrer</span></span> <span><time datetime="2026-01-14T14:32:04-07:00" title="Wednesday, January 14, 2026 - 14:32">Wed, 01/14/2026 - 14:32</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/ecee/sites/default/files/styles/focal_image_wide/public/2026-01/phone%20thumbnail.jpg?h=04d92ac6&amp;itok=RfjtI8FW" width="1200" height="800" alt="smartphone"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/ecee/taxonomy/term/52"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/ecee/taxonomy/term/203" hreflang="en">Photonics</a> <a href="/ecee/taxonomy/term/204" hreflang="en">electrical engineering</a> </div> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> </div> </div> </div> </div> <div>A team of engineers has developed a new device that works like a laser but, instead of light, generates incredibly small vibrations called surface acoustic waves.</div> <script> window.location.href = 
`/today/2026/01/14/earthquake-chip-new-tech-generates-tiny-waves-could-make-smartphones-smaller-faster`; </script> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Wed, 14 Jan 2026 21:32:04 +0000 Charles Ferrer 2799 at /ecee