
Artificial Intelligence

Quantum Material Exhibits “Non-Local” Behavior That Mimics Brain Function

New research shows a possible way to improve energy-efficient computing

Credit: Mario Rojas / UC San Diego
Electrical stimuli passed between neighboring electrodes can also affect non-neighboring electrodes.

Newswise — We often believe computers are more efficient than humans. After all, computers can complete a complex math equation in a moment and can also recall the name of that one actor we keep forgetting. However, human brains can process complicated layers of information quickly, accurately, and with almost no energy input: recognizing a face after only seeing it once or instantly knowing the difference between a mountain and the ocean. These simple human tasks require enormous processing and energy input from computers, and even then the results come with varying degrees of accuracy.

Creating brain-like computers with minimal energy requirements would revolutionize nearly every aspect of modern life. Funded by the Department of Energy, Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) — a nationwide consortium led by the University of California San Diego — has been at the forefront of this research. 

UC San Diego Assistant Professor of Physics Alex Frañó is co-director of Q-MEEN-C and thinks of the center’s work in phases. In the first phase, he worked closely with President Emeritus of the University of California and Professor of Physics Robert Dynes, as well as Rutgers Professor of Engineering Shriram Ramanathan. Together, their teams were successful in finding ways to create or mimic the properties of a single brain element (such as a neuron or synapse) in a quantum material.

Now, in phase two, new research from Q-MEEN-C, published in Nano Letters, shows that electrical stimuli passed between neighboring electrodes can also affect non-neighboring electrodes. Known as non-locality, this discovery is a crucial milestone in the journey toward new types of devices that mimic brain functions known as neuromorphic computing.

“In the brain it’s understood that these non-local interactions are nominal — they happen frequently and with minimal exertion,” stated Frañó, one of the paper’s co-authors. “It’s a crucial part of how the brain operates, but similar behaviors replicated in synthetic materials are scarce.”

Like many research projects now bearing fruit, the idea to test whether non-locality in quantum materials was possible came about during the pandemic. Physical lab spaces were shuttered, so the team ran calculations on arrays that contained multiple devices to mimic the multiple neurons and synapses in the brain. In running these tests, they found that non-locality was theoretically possible.

When labs reopened, they refined this idea further and enlisted UC San Diego Jacobs School of Engineering Associate Professor Duygu Kuzum, whose work in electrical and computer engineering helped them turn a simulation into an actual device.

This involved taking a thin film of nickelate — a “quantum material” ceramic that displays rich electronic properties — inserting hydrogen ions, and then placing a metal conductor on top. A wire is attached to the metal so that an electrical signal can be sent to the nickelate. The signal causes the gel-like hydrogen atoms to move into a certain configuration and when the signal is removed, the new configuration remains.


“This is essentially what a memory looks like,” stated Frañó. “The device remembers that you perturbed the material. Now you can fine tune where those ions go to create pathways that are more conductive and easier for electricity to flow through.” 

Traditionally, creating networks that transport sufficient electricity to power something like a laptop requires complicated circuits with continuous connection points, which is both inefficient and expensive. The design concept from Q-MEEN-C is much simpler because the non-local behavior in the experiment means all the wires in a circuit do not have to be connected to each other. Think of a spider web, where movement in one part can be felt across the entire web.

This is analogous to how the brain learns: not in a linear fashion, but in complex layers. Each piece of learning creates connections in multiple areas of the brain, allowing us to differentiate not just trees from dogs, but an oak tree from a palm tree or a golden retriever from a poodle.

To date, these pattern recognition tasks that the brain executes so beautifully can only be simulated through computer software. AI programs like ChatGPT and Bard use complex algorithms to mimic brain-based activities like thinking and writing. And they do it really well. But without correspondingly advanced hardware to support it, at some point software will reach its limit.

Frañó is eager for a hardware revolution to parallel the one currently happening with software, and showing that it’s possible to reproduce non-local behavior in a synthetic material inches scientists one step closer. The next step will involve creating more complex arrays with more electrodes in more elaborate configurations.

“This is a very important step forward in our attempts to understand and simulate brain functions,” said Dynes, who is also a co-author. “Showing a system that has non-local interactions leads us further in the direction toward how our brains think. Our brains are, of course, much more complicated than this, but a physical system that is capable of learning must be highly interactive, and this is a necessary first step. We can now think of longer range coherence in space and time.”

“It’s widely understood that in order for this technology to really explode, we need to find ways to improve the hardware — a physical machine that can perform the task in conjunction with the software,” Frañó stated. “The next phase will be one in which we create efficient machines whose physical properties are the ones that are doing the learning. That will give us a new paradigm in the world of artificial intelligence.”


This work is primarily supported by Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C), an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences (award DE-SC0019273). A full list of funders can be found in the paper acknowledgements.

Journal Link: Nano Letters

Source: University of California San Diego


Tech

From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam

Many of the AI images generated by spammers and scammers have religious themes. immortal70/iStock via Getty Images

Renee DiResta, Stanford University; Abhiram Reddy, Georgetown University, and Josh A. Goldstein, Georgetown University

If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.

Others, such as renderings of Jesus made out of crustaceans, are just bizarre.

Like the AI image of the pope in a puffer jacket that went viral in March 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.

Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.

We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.

Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.

There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.


Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.

Generative AI meets scams and spam

Internet spammers and scammers are nothing new.

For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.

On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.

In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”

AI-generated content has become another “weird trick.”

It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”


Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.

Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.

Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then comment on the post of the AI-generated images with the URLs of the ad-laden content farms they wanted users to click.

But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.

Rate ‘my’ work!

When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and set to sunset in August – we found that they were “copypasta” captions, which means that they were repeated across posts.

Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.
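To illustrate how this kind of repetition can be surfaced in practice – this is a minimal sketch, not the methodology used in our study, and the post records, field names and threshold below are hypothetical – a short script can group posts by normalized caption and flag captions reused across many distinct pages:

```python
from collections import defaultdict

# Hypothetical post records; real data would come from an export of a
# social media monitoring tool such as CrowdTangle.
posts = [
    {"page": "PageA", "caption": "Made this with my own hands. Rate my work!"},
    {"page": "PageB", "caption": "Made this with my own hands. Rate my work!"},
    {"page": "PageC", "caption": "Today is my 102nd birthday, no one wished me yet."},
    {"page": "PageA", "caption": "Today is my 102nd birthday, no one wished me yet."},
]

def find_copypasta(posts, min_pages=2):
    """Group posts by normalized caption and keep captions reused across pages."""
    pages_by_caption = defaultdict(set)
    for post in posts:
        # Collapse case and whitespace so near-identical captions match.
        normalized = " ".join(post["caption"].lower().split())
        pages_by_caption[normalized].add(post["page"])
    # Flag only captions that appear on at least `min_pages` distinct pages.
    return {cap: pages for cap, pages in pages_by_caption.items() if len(pages) >= min_pages}

for caption, pages in find_copypasta(posts).items():
    print(f"{len(pages)} pages reuse: {caption!r}")
```

Raising the min_pages threshold trades recall for precision: genuinely popular phrases are less likely to be flagged, but low-volume copypasta may be missed.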

Algorithms push AI-generated content

Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.


Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.

In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.

The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.

We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform each quarter.

They showed that the proportion of content that users saw from pages and people they don’t follow steadily increased between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.

‘This post was brought to you by AI’

Since Meta currently does not flag AI-generated content by default, we sometimes observed users warning others about scams or spam AI content with infographics.

Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.


In May 2024, Facebook will begin applying a “Made with AI” label to content it can reliably detect as synthetic.

But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?

While our work focused on Facebook spam and scams, there are broader implications.

Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.

Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.

Shrimp Jesus may be an obvious fake. But the challenge of assessing what’s real is only heating up.

Renee DiResta, Research Manager of the Stanford Internet Observatory, Stanford University; Abhiram Reddy, Research Assistant at the Center for Security and Emerging Technology, Georgetown University, and Josh A. Goldstein, Research Fellow at the Center for Security and Emerging Technology, Georgetown University


This article is republished from The Conversation under a Creative Commons license. Read the original article.

The science section of our news blog STM Daily News provides readers with captivating and up-to-date information on the latest scientific discoveries, breakthroughs, and innovations across various fields. We offer engaging and accessible content, ensuring that readers with different levels of scientific knowledge can stay informed. Whether it’s exploring advancements in medicine, astronomy, technology, or environmental sciences, our science section strives to shed light on the intriguing world of scientific exploration and its profound impact on our daily lives. From thought-provoking articles to informative interviews with experts in the field, STM Daily News Science offers a harmonious blend of factual reporting, analysis, and exploration, making it a go-to source for science enthusiasts and curious minds alike. https://stmdailynews.com/category/science/



Tech

Honeywell and Google Cloud to Accelerate Autonomous Operations with AI Agents for the Industrial Sector


Google Cloud AI to enhance Honeywell’s product offerings and help upskill the industrial workforce

New solutions will connect to enterprise-wide industrial data from Honeywell Forge, a leading IoT platform for industrials

CHARLOTTE, N.C. and SUNNYVALE, Calif. /PRNewswire/ — Honeywell (NASDAQ: HON) and Google Cloud announced a unique collaboration connecting artificial intelligence (AI) agents with assets, people and processes to accelerate safer, autonomous operations for the industrial sector.


This partnership will bring together the multimodality and natural language capabilities of Gemini on Vertex AI – Google Cloud’s AI platform – and the massive data set on Honeywell Forge, a leading Internet of Things (IoT) platform for industrials. This will unleash easy-to-understand, enterprise-wide insights across a multitude of use cases. Honeywell’s customers across the industrial sector will benefit from opportunities to reduce maintenance costs, increase operational productivity and upskill employees. The first solutions built with Google Cloud AI will be available to Honeywell’s customers in 2025.

“The path to autonomy requires assets working harder, people working smarter and processes working more efficiently,” said Vimal Kapur, Chairman and CEO of Honeywell. “By combining Google Cloud’s AI technology with our deep domain expertise–including valuable data on our Honeywell Forge platform–customers will receive unparalleled, actionable insights bridging the physical and digital worlds to accelerate autonomous operations, a key driver of Honeywell’s growth.”

“Our partnership with Honeywell represents a significant step forward in bringing the transformative power of AI to industrial operations,” said Thomas Kurian, CEO of Google Cloud. “With Gemini on Vertex AI, combined with Honeywell’s industrial data and expertise, we’re creating new opportunities to optimize processes, empower workforces and drive meaningful business outcomes for industrial organizations worldwide.”

With the mass retirement of workers from the baby boomer generation, the industrial sector faces both labor and skills shortages, and AI can be part of the solution – as a revenue generator, not a job eliminator. Most industrial AI leaders (82%) believe their companies are early adopters of AI, but only 17% have fully launched their initial AI plans, according to Honeywell’s 2024 Industrial AI Insights report. This partnership will provide AI agents that augment the existing operations and workforce to help drive AI adoption and enable companies across the sector to benefit from expanding automation.

Honeywell and Google Cloud will co-innovate solutions around:


Purpose-Built, Industrial AI Agents 
Built on Google Cloud’s Vertex AI Search and tailored to engineers’ specific needs, a new AI-powered agent will help automate tasks and reduce project design cycles, enabling users to focus on driving innovation and delivering exceptional customer experiences.

Additional agents will utilize Google’s large language models (LLMs) to help technicians resolve maintenance issues more quickly (e.g., “How did a unit perform last night?” “How do I replace the input/output module?” or “Why is my system making this sound?”). By leveraging Gemini’s multimodal capabilities, users will be able to process various data types such as images, videos, text and sensor readings, helping engineers get the answers they need quickly – going beyond simple chat and predictions. (An illustrative sketch of such a multimodal query appears after this list of solution areas.)

Enhanced Cybersecurity
Google Threat Intelligence – featuring frontline insight from Mandiant – will be integrated into current Honeywell cybersecurity products, including Global Analysis, Research and Defense (GARD) Threat Intelligence and Secure Media Exchange (SMX), to help enhance threat detection and protect global infrastructure for industrial customers. 

On-the-Edge Device Advances 
Looking ahead, Honeywell will explore using Google’s Gemini Nano model to enhance the intelligence of Honeywell edge AI devices across multiple use cases and verticals, ranging from scanning performance to voice-based guided workflows, maintenance, operations and alarm assistance, without the need to connect to the internet and cloud. This is the beginning of a new wave of more intelligent devices and solutions, which will be the subject of future Honeywell announcements.
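For readers curious what such a multimodal maintenance query could look like in code, the sketch below uses the publicly available Vertex AI Python SDK to send a technician’s question plus an equipment photo to a Gemini model. It is only an illustration of the general pattern the announcement describes: the project ID, storage path and prompt are hypothetical, and it does not represent Honeywell’s or Google Cloud’s actual agent implementation.

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Part

# Hypothetical project and region; substitute real values for your environment.
vertexai.init(project="example-industrial-project", location="us-central1")

# A Gemini model served on Vertex AI.
model = GenerativeModel("gemini-1.5-pro")

# A technician's question paired with a photo of the equipment (hypothetical file path).
photo = Part.from_uri("gs://example-bucket/compressor_panel.jpg", mime_type="image/jpeg")
question = (
    "The input/output module on this unit is showing a fault light. "
    "Walk me through how to replace it safely."
)

# Gemini accepts mixed image + text input and returns a text answer.
response = model.generate_content([photo, question])
print(response.text)
```

In a production agent, a call like this would typically be grounded in operational data (such as Honeywell Forge telemetry) and wrapped with retrieval, access controls and domain-specific prompting; none of that is shown here.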

The integration of Google Cloud technology, by leveraging AI to enable growth and productivity, also further supports the alignment of Honeywell’s portfolio with three compelling megatrends, including automation.

About Honeywell

Honeywell is an integrated operating company serving a broad range of industries and geographies around the world. Our business is aligned with three powerful megatrends – automation, the future of aviation and energy transition – underpinned by our Honeywell Accelerator operating system and Honeywell Forge IoT platform. As a trusted partner, we help organizations solve the world’s toughest, most complex challenges, providing actionable solutions and innovations through our Aerospace Technologies, Industrial Automation, Building Automation and Energy and Sustainability Solutions business segments that help make the world smarter and safer as well as more secure and sustainable. For more news and information on Honeywell, please visit www.honeywell.com/newsroom.

About Google Cloud

Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated, and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models, and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.


SOURCE Honeywell


Science

ChatGPT and the movie ‘Her’ are just the latest example of the ‘sci-fi feedback loop’

ChatGPT-4o and the films ‘Her’ and ‘Blade Runner 2049’ all pull from one another as they develop the concept of a virtual assistant. Warner Bros.

Rizwan Virk, Arizona State University


In May 2024, OpenAI CEO Sam Altman sparked a firestorm by tweeting a reference to the 2013 movie “Her” to highlight the novelty of the latest iteration of ChatGPT.

Within days, actor Scarlett Johansson, who played the voice of Samantha, the AI girlfriend of the protagonist in the movie “Her,” accused the company of improperly using her voice after she had spurned their offer to make her the voice of ChatGPT’s new virtual assistant. Johansson hired legal counsel and has been invited to testify before Congress.

This tiff highlights a broader interchange between Hollywood and Silicon Valley known as the “sci-fi feedback loop.” The sci-fi feedback loop, the subject of my doctoral research, describes how science fiction and technological innovation feed off each other. This dynamic is bidirectional and can sometimes play out over many decades, resulting in an ongoing loop.

Fiction sparks dreams of Moon travel

One of the most famous examples of this loop is Moon travel.

Jules Verne’s 1865 novel “From the Earth to the Moon” and the fiction of H.G. Wells inspired one of the first films to visualize such a journey, 1902’s “A Trip to the Moon.”

The fiction of Verne and Wells also influenced future rocket scientists such as Robert Goddard, Hermann Oberth and Oberth’s better-known protégé, Wernher von Braun. The innovations of these men – including the V-2 rocket built by von Braun during World War II – inspired works of science fiction, such as the 1950 film “Destination Moon,” which included a rocket that looked just like the V-2.

Films like “Destination Moon” would then go on to bolster public support for lavish government spending on the space program.

The 1902 silent short “A Trip to the Moon”: https://www.youtube.com/embed/xLVChRVfZ74


Creative symbiosis

The sci-fi feedback loop generally follows the same cycle.

First, the technological climate of a given era will shape that period’s science fiction. For example, the personal computing revolution of the 1970s and 1980s directly inspired the works of cyberpunk writers Neal Stephenson and William Gibson.

Then the sci-fi that emerges will go on to inspire real-world technological innovation. In his 1992 classic “Snow Crash,” Stephenson coined the term “metaverse” to describe a 3-D, video game-like world accessed through virtual reality goggles.

Silicon Valley entrepreneurs and innovators have been trying to build a version of this metaverse ever since. The virtual world of the video game Second Life, released in 2003, took a stab at this: Players lived in virtual homes, went to virtual dance clubs and virtual concerts with virtual girlfriends and boyfriends, and were even paid virtual dollars for showing up at virtual jobs.

This technology seeded yet more fiction; in my research, I discovered that sci-fi novelist Ernest Cline had spent a lot of time playing Second Life, and it inspired the metaverse of his bestselling novel “Ready Player One.”

The cycle continued: Employees of Oculus VR – now known as Meta Reality Labs – were given copies of “Ready Player One” to read as they developed the company’s virtual reality headsets. When Facebook changed its name to Meta in 2021, it did so in the hopes of being at the forefront of building the metaverse, though the company’s grand ambitions have since been tempered somewhat.

Metaverse Fashion Week, the first virtual fashion week, was hosted by the Decentraland virtual world in 2022. Vittorio Zunino Celotto/Getty Images

Another sci-fi franchise that has its fingerprints all over this loop is “Star Trek,” which first aired in 1966, right in the middle of the space race.

Steve Perlman, the inventor of Apple’s QuickTime media format and player, said he was inspired by an episode of “Star Trek: The Next Generation,” in which Lt. Commander Data, an android, sifts through multiple streams of audio and video files. And Rob Haitani, the designer of the Palm Pilot’s operating system, has said that the bridge on the Enterprise influenced its interface.


In my research, I also discovered that the show’s Holodeck – a room that could simulate any environment – influenced both the name and the development of Microsoft’s HoloLens augmented reality glasses.

From ALICE to ‘Her’

Which brings us back to OpenAI and “Her.”

In the movie, the protagonist, Theodore, played by Joaquin Phoenix, acquires an AI assistant, “Samantha,” voiced by Johansson. He begins to develop feelings for Samantha – so much so that he starts to consider her his girlfriend.

ChatGPT-4o, the latest version of the generative AI software, seems to be able to cultivate a similar relationship between user and machine. Not only can ChatGPT-4o speak to you and “understand” you, but it can also do so sympathetically, as a romantic partner would.

There’s little doubt that the depiction of AI in “Her” influenced OpenAI’s developers. In addition to Altman’s tweet, the company’s promotional videos for ChatGPT-4o feature a chatbot speaking with a job candidate before his interview, propping him up and encouraging him – as, well, an AI girlfriend would. The AI featured in the clips, Ars Technica observed, was “disarmingly lifelike,” and willing “to laugh at your jokes and your dumb hat.”

But you might be surprised to learn that a previous generation of chatbots inspired Spike Jonze, the director and screenwriter of “Her,” to write the screenplay in the first place. Nearly a decade before the film’s release, Jonze had interacted with a version of the ALICE chatbot, which was one of the first chatbots to have a defined personality – in ALICE’s case, that of a young woman.

Filmmaker Spike Jonze won the Oscar for best original screenplay for ‘Her’ in 2014. Kevork Djansezian/Getty Images

The ALICE chatbot won the Loebner Prize three times; the prize was awarded annually until 2019 to the AI software that came closest to passing the Turing Test, long seen as a threshold for determining whether artificial intelligence has become indistinguishable from human intelligence.

The sci-fi feedback loop has no expiration date. AI’s ability to form relationships with humans is a theme that continues to be explored in fiction and real life.


A few years after “Her,” “Blade Runner 2049” featured a virtual girlfriend, Joi, with a holographic body. Well before the latest drama with OpenAI, companies had started developing and pitching virtual girlfriends, a process that will no doubt continue. As science fiction writer and social media critic Cory Doctorow wrote in 2017, “Science fiction does something better than predict the future: It influences it.”

Rizwan Virk, Faculty Associate, PhD Candidate in Human and Social Dimensions of Science and Technology, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
