Will BlackRock’s ETF slingshot Bitcoin’s price skyward?

Have the world’s largest financial firms finally “seen the light” with Bitcoin? Will demand outstrip supply, making a BTC price rise inevitable?

Source: Will BlackRock’s ETF slingshot Bitcoin’s price skyward?

When computer vision works more like a brain, it sees more like people do


From cameras to self-driving cars, many of today’s technologies depend on artificial intelligence to extract meaning from visual information. Today’s AI technology has artificial neural networks at its core, and most of the time we can trust these AI computer vision systems to see things the way we do — but sometimes they falter. According to MIT and IBM research scientists, one way to improve computer vision is to instruct the artificial neural networks that they rely on to deliberately mimic the way the brain’s biological neural network processes visual images.

Researchers led by MIT Professor James DiCarlo, the director of MIT’s Quest for Intelligence and member of the MIT-IBM Watson AI Lab, have made a computer vision model more robust by training it to work like a part of the brain that humans and other primates rely on for object recognition. This May, at the International Conference on Learning Representations, the team reported that when they trained an artificial neural network using neural activity patterns in the brain’s inferior temporal (IT) cortex, the artificial neural network was more robustly able to identify objects in images than a model that lacked that neural training. And the model’s interpretations of images more closely matched what humans saw, even when images included minor distortions that made the task more difficult.

Comparing neural circuits

Many of the artificial neural networks used for computer vision already resemble the multilayered brain circuits that process visual information in humans and other primates. Like the brain, they use neuron-like units that work together to process information. As they are trained for a particular task, these layered components collectively and progressively process the visual information to complete the task — determining, for example, that an image depicts a bear or a car or a tree.
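As a toy illustration of this layered, neuron-like processing (everything below, including the weights and category names, is invented for the sketch; it is not one of the models from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple neuron-like nonlinearity: a unit only "fires" above zero.
    return np.maximum(0.0, x)

# A flattened 8x8 "image": 64 pixel intensities in [0, 1).
image = rng.random(64)

# Two stages of neuron-like units, loosely analogous to successive
# visual areas: each stage is a weighted sum followed by a nonlinearity.
W1 = 0.1 * rng.normal(size=(32, 64))   # early stage: 32 units
W2 = 0.1 * rng.normal(size=(3, 32))    # late stage: one unit per category

hidden = relu(W1 @ image)              # progressive processing, stage 1
scores = W2 @ hidden                   # stage 2: a score per category

labels = ["bear", "car", "tree"]       # hypothetical categories
prediction = labels[int(np.argmax(scores))]
```

Training adjusts the weights `W1` and `W2` so that, over many images, the highest-scoring unit matches the correct category.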

DiCarlo and others previously found that when such deep-learning computer vision systems establish efficient ways to solve visual problems, they end up with artificial circuits that work similarly to the neural circuits that process visual information in our own brains. That is, they turn out to be surprisingly good scientific models of the neural mechanisms underlying primate and human vision.

That resemblance is helping neuroscientists deepen their understanding of the brain. By demonstrating ways visual information can be processed to make sense of images, computational models suggest hypotheses about how the brain might accomplish the same task. As developers continue to refine computer vision models, neuroscientists have found new ideas to explore in their own work.

“As vision systems get better at performing in the real world, some of them turn out to be more human-like in their internal processing. That’s useful from an understanding-biology point of view,” says DiCarlo, who is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research.

Engineering a more brain-like AI

While their potential is promising, computer vision systems are not yet perfect models of human vision. DiCarlo suspected one way to improve computer vision may be to incorporate specific brain-like features into these models.

To test this idea, he and his collaborators built a computer vision model using neural data previously collected from vision-processing neurons in the monkey IT cortex — a key part of the primate ventral visual pathway involved in the recognition of objects — while the animals viewed various images. More specifically, Joel Dapello, a Harvard University graduate student and former MIT-IBM Watson AI Lab intern; and Kohitij Kar, assistant professor and Canada Research Chair (Visual Neuroscience) at York University and visiting scientist at MIT; in collaboration with David Cox, IBM Research’s vice president for AI models and IBM director of the MIT-IBM Watson AI Lab; and other researchers at IBM Research and MIT asked an artificial neural network to emulate the behavior of these primate vision-processing neurons while the network learned to identify objects in a standard computer vision task.

“In effect, we said to the network, ‘please solve this standard computer vision task, but please also make the function of one of your inside simulated “neural” layers be as similar as possible to the function of the corresponding biological neural layer,’” DiCarlo explains. “We asked it to do both of those things as best it could.” This forced the artificial neural circuits to find a different way to process visual information than the standard, computer vision approach, he says.
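A minimal sketch of such a dual objective, with made-up numbers (the similarity measure and the trade-off weight `alpha` are assumptions for illustration; the paper's actual training loss may differ):

```python
import numpy as np

def cross_entropy(scores, target):
    # Softmax cross-entropy on one example: the standard task loss.
    e = np.exp(scores - scores.max())
    return float(-np.log(e[target] / e.sum()))

def mismatch(model_layer, brain_layer):
    # One simple way to penalize divergence from recorded responses.
    return float(np.mean((model_layer - brain_layer) ** 2))

# Hypothetical quantities for a single training image:
scores = np.array([2.0, 0.5, -1.0])         # model's category scores
target = 0                                   # index of the correct category
model_it = np.array([0.2, 0.8, 0.1, 0.5])   # simulated "IT layer" responses
monkey_it = np.array([0.3, 0.7, 0.0, 0.6])  # recorded biological responses

alpha = 0.5  # assumed trade-off between the two objectives
loss = cross_entropy(scores, target) + alpha * mismatch(model_it, monkey_it)
```

Minimizing `loss` pushes the network to classify correctly while keeping its simulated IT layer close to the biological recordings, i.e., "do both of those things as best it could."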

After training the artificial model with biological data, DiCarlo’s team compared its activity to a similarly sized neural network model trained without neural data, using the standard approach for computer vision. They found that the new, biologically informed model IT layer was — as instructed — a better match for IT neural data. That is, for every image tested, the population of artificial IT neurons in the model responded more similarly to the corresponding population of biological IT neurons.

The researchers also found that the model IT was a better match to IT neural data collected from another monkey, even though the model had never seen data from that animal, and even when that comparison was evaluated on that monkey’s IT responses to new images. This indicated that the team’s new, “neurally aligned” computer model may be an improved model of the neurobiological function of the primate IT cortex — an interesting finding, given that it was previously unknown whether the amount of neural data that can be currently collected from the primate visual system is capable of directly guiding model development.

With their new computer model in hand, the team asked whether the “IT neural alignment” procedure also leads to any changes in the overall behavioral performance of the model. Indeed, they found that the neurally aligned model was more human-like in its behavior — it tended to succeed in correctly categorizing objects in images for which humans also succeed, and it tended to fail when humans also fail.

Adversarial attacks

The team also found that the neurally aligned model was more resistant to “adversarial attacks” that developers use to test computer vision and AI systems. In computer vision, adversarial attacks introduce small distortions into images that are meant to mislead an artificial neural network.

“Say that you have an image that the model identifies as a cat. Because you have the knowledge of the internal workings of the model, you can then design very small changes in the image so that the model suddenly thinks it’s no longer a cat,” DiCarlo explains.

These minor distortions don’t typically fool humans, but computer vision models struggle with these alterations. A person who looks at the subtly distorted cat still reliably and robustly reports that it’s a cat. But standard computer vision models are more likely to mistake the cat for a dog, or even a tree.
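The kind of targeted distortion described here can be sketched with a toy linear “cat detector” (the weights, pixel values, and step size are all invented for illustration; real fast-gradient-style attacks perturb along the gradient of a deep network's loss, which for a linear model reduces to the sign of the weights):

```python
import numpy as np

# Toy linear "cat vs. not-cat" classifier: score > 0 means "cat".
w = np.array([0.9, -0.4, 0.3, 0.7])      # assumed weights
image = np.array([0.2, 0.5, 0.1, 0.3])   # assumed pixel intensities

def label(x):
    return "cat" if float(w @ x) > 0 else "not cat"

# Attack: move every pixel a small step epsilon in the direction that
# lowers the "cat" score. For this linear model that direction is sign(w).
epsilon = 0.1
adversarial = image - epsilon * np.sign(w)

before = label(image)        # the clean image reads as "cat"
after = label(adversarial)   # a barely changed image flips the label
```

No pixel moves by more than 0.1, yet the score crosses the decision boundary — a person looking at the two images would see essentially the same thing.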

“There must be some internal differences in the way our brains process images that lead to our vision being more resistant to those kinds of attacks,” DiCarlo says. And indeed, the team found that when they made their model more neurally aligned, it became more robust, correctly identifying more images in the face of adversarial attacks. The model could still be fooled by stronger “attacks,” but so can people, DiCarlo says. His team is now exploring the limits of adversarial robustness in humans.

A few years ago, DiCarlo’s team found they could also improve a model’s resistance to adversarial attacks by designing the first layer of the artificial network to emulate the early visual processing layer in the brain. One key next step is to combine such approaches — making new models that are simultaneously neurally aligned at multiple visual processing layers.

The new work is further evidence that an exchange of ideas between neuroscience and computer science can drive progress in both fields. “Everybody gets something out of the exciting virtuous cycle between natural/biological intelligence and artificial intelligence,” DiCarlo says. “In this case, computer vision and AI researchers get new ways to achieve robustness, and neuroscientists and cognitive scientists get more accurate mechanistic models of human vision.”

This work was supported by the MIT-IBM Watson AI Lab, Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency, the MIT Shoemaker Fellowship, U.S. Office of Naval Research, the Simons Foundation, and Canada Research Chair Program.

Source: When computer vision works more like a brain, it sees more like people do

Educating national security leaders on artificial intelligence


Understanding artificial intelligence and how it relates to matters of national security has become a top priority for military and government leaders in recent years. A new three-day custom program entitled “Artificial Intelligence for National Security Leaders” — AI4NSL for short — aims to educate leaders who may not have a technical background on the basics of AI, machine learning, and data science, and how these topics intersect with national security.

“National security fundamentally is about two things: getting information out of sensors and processing that information. These are two things that AI excels at. The AI4NSL class engages national security leaders in understanding how to navigate the benefits and opportunities that AI affords, while also understanding its potential negative consequences,” says Aleksander Madry, the Cadence Design Systems Professor at MIT and one of the course’s faculty directors.

Organized jointly by MIT’s School of Engineering, MIT Stephen A. Schwarzman College of Computing, and MIT Sloan Executive Education, AI4NSL wrapped up its fifth cohort in April. The course brings leaders from every branch of the U.S. military, as well as some foreign military leaders from NATO, to MIT’s campus, where they learn from faculty experts on a variety of technical topics in AI, as well as how to navigate organizational challenges that arise in this context.

“We set out to put together a real executive education class on AI for senior national security leaders,” says Madry. “For three days, we are teaching these leaders not only an understanding of what this technology is about, but also how to best adopt these technologies organizationally.”

The original idea sprang from discussions with senior U.S. Air Force (USAF) leaders and members of the Department of the Air Force (DAF)-MIT AI Accelerator in 2019.

According to Major John Radovan, former deputy director of the DAF-MIT AI Accelerator, in recent years it has become clear that national security leaders needed a deeper understanding of AI technologies and their implications for security, warfare, and military operations. In February 2020, Radovan and his team at the DAF-MIT AI Accelerator started building a custom course to help guide senior leaders in their discussions about AI.

“This is the only course out there that is focused on AI specifically for national security,” says Radovan. “We didn’t want to make this course just for members of the Air Force — it had to be for all branches of the military. If we are going to operate as a joint force, we need to have the same vocabulary and the same mental models about how to use this technology.”

After a pilot program in collaboration with MIT Open Learning and the MIT Computer Science and Artificial Intelligence Laboratory, Radovan connected with faculty at the School of Engineering and MIT Schwarzman College of Computing, including Madry, to refine the course’s curriculum. They then enlisted colleagues and faculty at MIT Sloan Executive Education to tailor the content to its audience. The result of this cross-school collaboration was a new iteration of AI4NSL, which was launched last summer.

In addition to providing participants with a basic overview of AI technologies, the course places a heavy emphasis on organizational planning and implementation.

“What we wanted to do was to create smart consumers at the command level. The idea was to present this content at a higher level so that people could understand the key frameworks, which will guide their thinking around the use and adoption of this material,” says Roberto Fernandez, the William F. Pounds Professor of Management, one of the AI4NSL instructors, and the course’s other faculty director.

During the three-day course, instructors from MIT’s Department of Electrical Engineering and Computer Science, Department of Aeronautics and Astronautics, and MIT Sloan School of Management cover a wide range of topics.

The first half of the course starts with a basic overview of concepts including AI, machine learning, deep learning, and the role of data. Instructors also present the problems and pitfalls of using AI technologies, including the potential for adversarial manipulation of machine learning systems, privacy challenges, and ethical considerations.

In the middle of day two, the course shifts to examine the organizational perspective, encouraging participants to consider how to effectively implement these technologies in their own units.

“What’s exciting about this course is the way it is formatted first in terms of understanding AI, machine learning, what data is, and how data feeds AI, and then giving participants a framework to go back to their units and build a strategy to make this work,” says Colonel Michelle Goyette, director of the Army Strategic Education Program at the Army War College and an AI4NSL participant.

Throughout the course, breakout sessions provide participants with an opportunity to collaborate and problem-solve on an exercise together. These breakout sessions build upon one another as the participants are exposed to new concepts related to AI.

“The breakout sessions have been distinctive because they force you to establish relationships with people you don’t know, so the networking aspect is key. Any time you can do more than receive information and actually get into the application of what you were taught, that really enhances the learning environment,” says Lieutenant General Brian Robinson, the commander of Air Education and Training Command for the USAF and an AI4NSL participant.

This spirit of teamwork, collaboration, and bringing together individuals from different backgrounds permeates the three-day program. The AI4NSL classroom not only brings together national security leaders from all branches of the military, it also brings together faculty from three schools across MIT.

“One of the things that's most exciting about this program is the kind of overarching theme of collaboration,” says Rob Dietel, director of executive programs at Sloan School of Management. “We're not drawing just from the MIT Sloan faculty, we're bringing in top faculty from the Schwarzman College of Computing and the School of Engineering. It's wonderful to be able to tap into those resources that are here on MIT’s campus to really make it the most impactful program that we can.”

As new developments in generative AI, such as ChatGPT, and machine learning alter the national security landscape, the organizers at AI4NSL will continue to update the curriculum to ensure it is preparing leaders to understand the implications for their respective units.

“The rate of change for AI and national security is so fast right now that it's challenging to keep up, and that's part of the reason we've designed this program. We've brought in some of our world-class faculty from different parts of MIT to really address the changing dynamic of AI,” adds Dietel.

Source: Educating national security leaders on artificial intelligence

Judith Herman’s “Truth and Repair,” Part 3: Applications to therapeutic jurisprudence


In this, my third look at Dr. Judith Herman’s important new examination of psychological trauma, Truth and Repair: How Trauma Survivors Envision Justice (2023), I would like to connect the theme of trauma and justice to therapeutic jurisprudence (TJ), a multidisciplinary school of theory and practice that examines the therapeutic and anti-therapeutic properties of law, legal processes, and legal institutions.

To summarize: Dr. Herman holds that the final stage of recovering from trauma is justice, reasoning that “(i)f trauma is truly a social problem, and indeed it is, then recovery cannot be simply a private, individual matter.” She identifies acknowledgment, apology, and accountability as the key elements of justice. Also, she identifies restitution, rehabilitation (of the offender), and prevention as the key elements of healing.

Therapeutic jurisprudence

Basic TJ principles hold that, whenever reasonably possible, outcomes of legal events (e.g., litigation, negotiation, or drafting of documents such as wills and trusts) should affirm the dignity and promote the psychological health of the parties involved.

These general goals are a strong match for Dr. Herman’s elements of justice and healing.

Truth and Repair frequently endorses restorative justice (RJ) — a concept and practice often mentioned in the same breath as TJ — as a promising avenue toward helping trauma survivors obtain justice. Herman invokes Australian criminologist and RJ adherent John Braithwaite in observing that RJ is about focusing on “repairing the harm of a crime rather than punishing offenders for breaking a law.” In fact, Braithwaite’s own work has closely analyzed what he sees as the similarities and differences between RJ and TJ.

I’ve noted on many occasions that therapeutic jurisprudence scholarship and practice need to better incorporate trauma-informed understandings and perspectives. Dr. Herman’s positing that justice is a final recovery step for trauma survivors significantly helps us to link trauma-informed prevention and response to TJ, and vice-versa.

***

Additional Reading

For free access to John Braithwaite’s comparison and contrast of TJ and RJ, “Restorative Justice and Therapeutic Jurisprudence,” Criminal Law Bulletin (2002), go here.

For free access to my extensive survey of therapeutic jurisprudence, “Therapeutic Jurisprudence: Foundations, Expansion, and Assessment,” University of Miami Law Review (2021), go here.

 

Source: Judith Herman’s “Truth and Repair,” Part 3: Applications to therapeutic jurisprudence

Researchers teach an AI to write better chart captions


Chart captions that explain complex trends and patterns are important for improving a reader’s ability to comprehend and retain the data being presented. And for people with visual disabilities, the information in a caption often provides their only means of understanding the chart.

But writing effective, detailed captions is a labor-intensive process. While autocaptioning techniques can alleviate this burden, they often struggle to describe cognitive features that provide additional context.

To help people author high-quality chart captions, MIT researchers have developed a dataset to improve automatic captioning systems. Using this tool, researchers could teach a machine-learning model to vary the level of complexity and type of content included in a chart caption based on the needs of users.

The MIT researchers found that machine-learning models trained for autocaptioning with their dataset consistently generated captions that were precise, semantically rich, and described data trends and complex patterns. Quantitative and qualitative analyses revealed that their models captioned charts more effectively than other autocaptioning systems.  

The team’s goal is to provide the dataset, called VisText, as a tool researchers can use as they work on the thorny problem of chart autocaptioning. These automatic systems could help provide captions for uncaptioned online charts and improve accessibility for people with visual disabilities, says co-lead author Angie Boggust, a graduate student in electrical engineering and computer science at MIT and member of the Visualization Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

“We’ve tried to embed a lot of human values into our dataset so that when we and other researchers are building automatic chart-captioning systems, we don’t end up with models that aren’t what people want or need,” she says.

Boggust is joined on the paper by co-lead author and fellow graduate student Benny J. Tang and senior author Arvind Satyanarayan, associate professor of computer science at MIT who leads the Visualization Group in CSAIL. The research will be presented at the Annual Meeting of the Association for Computational Linguistics.

Human-centered analysis

The researchers were inspired to develop VisText from prior work in the Visualization Group that explored what makes a good chart caption. In that study, researchers found that sighted users and blind or low-vision users had different preferences for the complexity of semantic content in a caption. 

The group wanted to bring that human-centered analysis into autocaptioning research. To do that, they developed VisText, a dataset of charts and associated captions that could be used to train machine-learning models to generate accurate, semantically rich, customizable captions.

Developing effective autocaptioning systems is no easy task. Existing machine-learning methods often try to caption charts the way they would an image, but people and models interpret natural images differently from how we read charts. Other techniques skip the visual content entirely and caption a chart using its underlying data table. However, such data tables are often not available after charts are published.

Given the shortfalls of using images and data tables, VisText also represents charts as scene graphs. Scene graphs, which can be extracted from a chart image, contain all the chart data while also including additional image context.

“A scene graph is like the best of both worlds — it contains almost all the information present in an image while being easier to extract from images than data tables. As it’s also text, we can leverage advances in modern large language models for captioning,” Tang explains.

They compiled a dataset that contains more than 12,000 charts — each represented as a data table, image, and scene graph — as well as associated captions. Each chart has two separate captions: a low-level caption that describes the chart’s construction (like its axis ranges) and a higher-level caption that describes statistics, relationships in the data, and complex trends.
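One record in a VisText-style dataset might look like the following (the field names and all values are invented to illustrate the structure just described, not the dataset's actual schema):

```python
# A single made-up chart record: three representations plus two captions.
record = {
    # Underlying data table (often unavailable for published charts).
    "data_table": {"year": [2020, 2021, 2022], "sales": [10, 14, 9]},
    # Rendered chart image (path is illustrative).
    "image_path": "charts/0001.png",
    # Scene graph: textual, extractable from the image, keeps context.
    "scene_graph": ("bar chart | x-axis 'year': 2020 to 2022 | "
                    "y-axis 'sales': 0 to 15 | bars: 10, 14, 9"),
    # Low-level caption: the chart's construction (axes, scales, units).
    "low_level_caption": ("A bar chart of sales by year; the y-axis "
                          "ranges from 0 to 15."),
    # Higher-level caption: statistics, relationships, trends.
    "high_level_caption": ("Sales rose from 10 in 2020 to a peak of 14 "
                           "in 2021, then fell to 9 in 2022."),
}
```

Because the scene graph is plain text, it can be fed directly to a text-based language model, unlike the rendered image.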

The researchers generated low-level captions using an automated system and crowdsourced higher-level captions from human workers.

“Our captions were informed by two key pieces of prior research: existing guidelines on accessible descriptions of visual media and a conceptual model from our group for categorizing semantic content. This ensured that our captions featured important low-level chart elements like axes, scales, and units for readers with visual disabilities, while retaining human variability in how captions can be written,” says Tang.

Translating charts

Once they had gathered chart images and captions, the researchers used VisText to train five machine-learning models for autocaptioning. They wanted to see how each representation — image, data table, and scene graph — and combinations of the representations affected the quality of the caption.

“You can think about a chart captioning model like a model for language translation. But instead of saying, translate this German text to English, we are saying translate this ‘chart language’ to English,” Boggust says.

Their results showed that models trained with scene graphs performed as well or better than those trained using data tables. Since scene graphs are easier to extract from existing charts, the researchers argue that they might be a more useful representation.

They also trained models with low-level and high-level captions separately. This technique, known as semantic prefix tuning, enabled them to teach the model to vary the complexity of the caption’s content.
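The prefix idea can be sketched as follows (the prefix strings and the text-to-text framing are assumptions for illustration; the actual control tokens used in the work may differ): a short control prefix prepended to the model's input tells a single trained model which caption level to produce.

```python
# Map each desired caption level to a control prefix. At training time,
# each (input, caption) pair gets the prefix matching its caption's
# level; at inference time, the caller picks the prefix.
PREFIXES = {
    "low": "caption low-level: ",    # construction: axes, scales, units
    "high": "caption high-level: ",  # trends, statistics, relationships
}

def build_model_input(scene_graph: str, level: str) -> str:
    return PREFIXES[level] + scene_graph

scene_graph = "line chart | x-axis 'year' | y-axis 'sales' | values: 10 14 9"
low_input = build_model_input(scene_graph, "low")
high_input = build_model_input(scene_graph, "high")
```

The same underlying chart representation thus yields two different model inputs, letting one model serve readers who want different levels of detail.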

In addition, they conducted a qualitative examination of captions produced by their best-performing method and categorized six types of common errors. For instance, a directional error occurs if a model says a trend is decreasing when it is actually increasing.

This fine-grained, robust qualitative evaluation was important for understanding how the model was making its errors. For example, using quantitative methods, a directional error might incur the same penalty as a repetition error, where the model repeats the same word or phrase. But a directional error could be more misleading to a user than a repetition error. The qualitative analysis helped them understand these types of subtleties, Boggust says.

These sorts of errors also expose limitations of current models and raise ethical considerations that researchers must consider as they work to develop autocaptioning systems, she adds.

Generative machine-learning models, such as those that power ChatGPT, have been shown to hallucinate or give incorrect information that can be misleading. While there is a clear benefit to using these models for autocaptioning existing charts, it could lead to the spread of misinformation if charts are captioned incorrectly.

“Maybe this means that we don’t just caption everything in sight with AI. Instead, perhaps we provide these autocaptioning systems as authorship tools for people to edit. It is important to think about these ethical implications throughout the research process, not just at the end when we have a model to deploy,” she says.

Boggust, Tang, and their colleagues want to continue optimizing the models to reduce common errors. They also want to expand the VisText dataset to include more charts, and more complex charts, such as those with stacked bars or multiple lines. And they would like to gain insight into what these autocaptioning models are actually learning about chart data.

This research was supported, in part, by a Google Research Scholar Award, the National Science Foundation, the MLA@CSAIL Initiative, and the United States Air Force Research Laboratory.

Source: Researchers teach an AI to write better chart captions

Transatlantic connections make the difference for MIT Portugal


Successful relationships take time to develop, with both parties investing energy and resources and fostering mutual trust and understanding. The MIT Portugal Program (MPP), a strategic partnership between MIT, Portuguese universities and research institutions, and the Portuguese government, is a case in point.

Portugal’s inaugural partnership with a U.S. university, MPP was established in 2006 as a collaboration between MIT and the Portuguese Science and Technology Foundation (Fundação para a Ciência e Tecnologia, or FCT). Since then, the program has developed research platforms in areas such as bioengineering, sustainable energy, transportation systems, engineering design, and advanced manufacturing. Now halfway through its third phase (MPP2030, begun in 2018), the program owes much of its success to the bonds connecting institutions and people across the Atlantic over the past 17 years.

“When you look at the successes and the impact, these things don’t happen overnight,” says John Hansman, the T. Wilson Professor of Aeronautics and Astronautics at MIT and co-director of MPP, noting, in particular, MPP’s achievements in the areas of energy and ocean research, as well as bioengineering. “This has been a longstanding relationship that we have and want to continue. I think it’s been beneficial to Portugal and to MIT. I think you can argue it has made substantial contributions to the success that Portugal is currently experiencing both in its technical capabilities and also its energy policy.”

With research often aimed at climate and sustainability solutions, one of MPP’s key strengths is its education of future leaders in science, technology, and entrepreneurship. And the program’s impacts carry forward, as several former MPP students are now on the faculty at participating Portuguese universities.

“The original intent of working together with Portugal was to try to establish collaboration between universities and to instill some of the MIT culture with the culture in Portugal, and I think that’s been hugely successful,” says Douglas Hart, MPP co-director and professor of mechanical engineering at MIT. “It has had a lot of impacts in terms of the research, but also the people.”

One of those people is André Pina, associate director of H2 strategy and origination at the company EDP, who was in residence at MIT in 2014 as part of the MPP Sustainable Energy Systems Doctoral Program. He says the competencies and experiences he acquired have been critical to his professional development in energy system planning, have influenced his approach to problem solving, and have allowed him to bring "holistic thinking" to business endeavors.

"The MIT Portugal Program has created a collaborative ecosystem between Portuguese universities, companies, and MIT that enabled the training of highly qualified professionals, while contributing to the positioning of Portuguese companies in new cutting-edge fields,” he says.

Building on MPP’s previous successes, MPP2030 focuses on advancing research in four strategic areas: climate science and climate change; earth systems from oceans to near space; digital transformation in manufacturing; and sustainable cities — all involving data science-intensive approaches and methodologies. Within these broad scientific areas, FCT funding has enabled seven collaborative large-scale “flagship” projects between Portuguese and MIT researchers during the current phase, as well as dozens of smaller projects.

Flagship projects currently underway include:

- AEROS Constellation
- C-Tech: Climate Driven Technologies for Low Carbon Cities
- K2D: Knowledge and Data from the Deep to Space
- NEWSAT
- Operator: Digital Transformation in Industry with a Focus on the Operator 4.0
- SNOB-5G: Scalable Network Backhauling for 5G
- Transformer 4.0: Digital Revolution of Power Transformers

Sustainability plays a significant role in MPP — reflective of the value both Portugal and MIT place on environmental, energy, and climate solutions. Projects under the Sustainable Cities strategic area, for example, are “helping cities in Portugal to become more efficient and more sustainable,” Hansman says, noting that MPP’s influence is being felt in cities across the country and it is “having a big impact in terms of local city planning activities.”

Regarding energy, Hansman points to a previous MPP phase that focused on the Azores as an isolated energy ecosystem and investigated its ability to minimize energy use and become energy independent.

“That view of system-level energy use helped to stimulate activity on the mainland in Portugal, which has helped Portugal become a leader in various energy sources and made them less vulnerable in the last year or two,” Hansman says.

In the Oceans to Near Space strategic area, the K2D flagship project also emphasizes research into sustainability solutions, as well as resilience to environmental change. Over the past few years, K2D researchers in Portugal and MIT have worked together to develop components that permit cost-effective gathering of chemical, physical, biological, and environmental data from the ocean depths. One current project investigates the integration of autonomous underwater vehicles with subsea cables to enhance both environmental monitoring and hazard warning systems.

“The program has been very successful,” Hart says. “They are now deploying a 2-kilometer cable just south of Lisbon, which will be in place in another month or so. Portugal has been hit with tsunamis that caused tremendous devastation, and one of the objectives of these cables is to sense tsunamis. So, it’s an early warning system.”

As a leader in ocean technology with a long history of maritime discovery, Portugal provides many opportunities for MIT’s ocean researchers. Hart notes that the Portuguese military invites international researchers on board its ships, providing MIT with research opportunities that would be financially difficult otherwise.

Hansman adds that partnering with researchers in the Azores provides MIT with unique access to facilities and labs in the middle of the Atlantic Ocean. For example, Hart will be teaching at a marine robotics summer school in the Azores this July.

Cadence Payne, an MIT PhD candidate, is among those planning to attend. Through MPP’s AEROS project, Payne has helped develop a modular “cubesat” that will orbit over Portugal’s Exclusive Economic Zone collecting images and radio data to help define the ecological health of the country’s coastal waters. The nanosatellite is expected to launch in late 2023 or early 2024, says Payne, adding that it will be Portugal’s first cubesat mission.

“In monitoring the ocean, you’re monitoring the climate,” Payne says. “If you want to do work on detecting climate change and developing methods of mitigating climate change … it helps to integrate international collaboration,” she says, adding that, for students, “it’s been a really beautiful opportunity for us to see the benefits of collaboration.”

“I would say one of the main benefits of working with Portugal is that we share many interests in research in the sense that they’re very interested in climate change, sustainability, environmental impacts and those kinds of things,” says Hart. “They have turned out to be a very good strategic partner for MIT, and, hopefully, MIT for them.”

Source: Transatlantic connections make the difference for MIT Portugal

A Wild New Roller Coaster Opens in Georgia

- Posted in Uncategorized by

A once-sleepy amusement park in Georgia has put itself on the map with the latest ride from an innovative company that has won the hearts of thrill-ride aficionados.

Source: A Wild New Roller Coaster Opens in Georgia

Community Schools: Fostering Innovation and Transformation

- Posted in Uncategorized by

By: David Greenberg & Dr. Linh Dang The challenges we face in today’s education landscape rarely have simple policy solutions. The youth mental health crisis, insufficient community and family engagement, and lack of access to early childhood learning are only a handful of the complex issues that require innovative strategies that extend beyond the school

Continue Reading

The post Community Schools: Fostering Innovation and Transformation appeared first on ED.gov Blog.

Source: Community Schools: Fostering Innovation and Transformation

Researchers uncover a new CRISPR-like system in animals that can edit the human genome

- Posted in Uncategorized by

A team of researchers led by Feng Zhang at the McGovern Institute for Brain Research at MIT and the Broad Institute of MIT and Harvard has uncovered the first programmable RNA-guided system in eukaryotes — organisms that include fungi, plants, and animals.

In a study published today in Nature, the team describes how the system is based on a protein called Fanzor. They showed that Fanzor proteins use RNA as a guide to target DNA precisely, and that Fanzors can be reprogrammed to edit the genome of human cells. The compact Fanzor systems have the potential to be more easily delivered to cells and tissues as therapeutics than CRISPR-Cas systems, and further refinements to improve their targeting efficiency could make them a valuable new technology for human genome editing.

CRISPR-Cas was first discovered in prokaryotes (bacteria and other single-cell organisms that lack nuclei) and scientists including those in Zhang’s lab have long wondered whether similar systems exist in eukaryotes. The new study demonstrates that RNA-guided DNA-cutting mechanisms are present across all kingdoms of life.

“CRISPR-based systems are widely used and powerful because they can be easily reprogrammed to target different sites in the genome,” says Zhang, senior author on the study, the James and Patricia Poitras Professor of Neuroscience in the MIT departments of Biological Engineering and Brain and Cognitive Sciences, an investigator at MIT’s McGovern Institute, a core institute member at the Broad Institute, and a Howard Hughes Medical Institute investigator. “This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.”

Searching the domains of life

A major aim of the Zhang lab is to develop genetic medicines using systems that can modulate human cells by targeting specific genes and processes. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” says Zhang.

Two years ago, Zhang lab members discovered a class of RNA-programmable systems in prokaryotes called OMEGAs, which are often linked with transposable elements, or “jumping genes,” in bacterial genomes and likely gave rise to CRISPR-Cas systems. That work also highlighted similarities between prokaryotic OMEGA systems and Fanzor proteins in eukaryotes, suggesting that the Fanzor enzymes might also use an RNA-guided mechanism to target and cut DNA.

In the new study, the researchers continued their work on RNA-guided systems by isolating Fanzors from fungi, algae, and amoeba species, in addition to a clam known as the northern quahog. Co-first author Makoto Saito of the Zhang lab led the biochemical characterization of the Fanzor proteins, showing that they are DNA-cutting endonuclease enzymes that use nearby non-coding RNAs known as ωRNAs to target particular sites in the genome. It is the first time this mechanism has been found in eukaryotes, including animals.

Unlike CRISPR proteins, Fanzor enzymes are encoded in the eukaryotic genome within transposable elements, and the team’s phylogenetic analysis suggests that the Fanzor genes have migrated from bacteria to eukaryotes through so-called horizontal gene transfer.

“These OMEGA systems are more ancestral to CRISPR and they are among the most abundant proteins on the planet, so it makes sense that they have been able to hop back and forth between prokaryotes and eukaryotes,” says Saito.

No collateral damage

To explore Fanzor’s potential as a genome editing tool, the researchers demonstrated that it can generate insertions and deletions at targeted genome sites within human cells. They found that the Fanzor system was initially less efficient at snipping DNA than CRISPR-Cas systems, but through systematic engineering they introduced a combination of mutations into the protein that increased its activity 10-fold. Additionally, unlike some CRISPR systems and the OMEGA protein TnpB, a fungal-derived Fanzor protein did not exhibit “collateral activity,” in which an RNA-guided enzyme cleaves its DNA target and also degrades nearby DNA or RNA. The results suggest that Fanzors could potentially be developed as efficient genome editors.

Co-first author Peiyu Xu led an effort to analyze the molecular structure of the Fanzor/ωRNA complex and illustrate how it latches onto DNA to cut it. Fanzor shares structural similarities with its prokaryotic counterpart, the CRISPR-Cas12 protein, but the interaction between the ωRNA and the catalytic domains of Fanzor is more extensive, suggesting that the ωRNA might play a role in the catalytic reactions. “We are excited about these structural insights for helping us further engineer and optimize Fanzor for improved efficiency and precision as a genome editor,” says Xu.

Like CRISPR-based systems, the Fanzor system can be easily reprogrammed to target specific genome sites, and Zhang says it could one day be developed into a powerful new genome editing technology for research and therapeutic applications. The abundance of RNA-guided endonucleases like Fanzors further expands the number of OMEGA systems known across kingdoms of life and suggests that there are more yet to be found.

“Nature is amazing. There’s so much diversity,” says Zhang. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”

The paper’s other authors include Guilhem Faure, Samantha Maguire, Soumya Kannan, Han Altae-Tran, Sam Vo, AnAn Desimone, and Rhiannon Macrae.

Support for this work was provided by the Howard Hughes Medical Institute; Poitras Center for Psychiatric Disorders Research at MIT; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman, and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; Robert Metcalfe; and Hugo Shong.

Source: Researchers uncover a new CRISPR-like system in animals that can edit the human genome

Gamifying medical data labeling to advance AI

- Posted in Uncategorized by

When Erik Duhaime PhD ’19 was working on his thesis in MIT’s Center for Collective Intelligence, he noticed his wife, then a medical student, spending hours studying on apps that offered flash cards and quizzes. His research had shown that, as a group, medical students could classify skin lesions more accurately than professional dermatologists; the trick was to continually measure each student’s performance on cases with known answers, throw out the opinions of people who were bad at the task, and intelligently pool the opinions of people who were good.

Combining his wife’s studying habits with his research, Duhaime founded Centaur Labs, a company that created a mobile app called DiagnosUs to gather the opinions of medical experts on real-world scientific and biomedical data. Through the app, users review anything from images of potentially cancerous skin lesions to audio clips of heart and lung sounds that could indicate a problem. If the users are accurate, Centaur uses their opinions and awards them small cash prizes. Those opinions, in turn, help medical AI companies train and improve their algorithms.

The approach pairs medical experts’ desire to hone their skills with the desperate need for well-labeled medical data at companies using AI for biotech, developing pharmaceuticals, or commercializing medical devices.

“I realized my wife’s studying could be productive work for AI developers,” Duhaime recalls. “Today we have tens of thousands of people using our app, and about half are medical students who are blown away that they win money in the process of studying. So, we have this gamified platform where people are competing with each other to label data, winning money if they’re good and improving their skills at the same time — and by doing that they are labeling data for teams building life-saving AI.”

Gamifying medical labeling

Duhaime completed his PhD under Thomas Malone, the Patrick J. McGovern Professor of Management and founding director of the Center for Collective Intelligence.

“What interested me was the wisdom of crowds phenomenon,” Duhaime says. “Ask a bunch of people how many jelly beans are in a jar, and the average of everybody’s answer is pretty close. I was interested in how you navigate that problem in a task that requires skill or expertise. Obviously you don’t just want to ask a bunch of random people if you have cancer, but at the same time, we know that second opinions in health care can be extremely valuable. You can think of our platform as a supercharged way of getting a second opinion.”

Duhaime began exploring ways to leverage collective intelligence to improve medical diagnoses. In one experiment, he trained groups of lay people and medical school students that he describes as “semiexperts” to classify skin conditions, finding that by combining the opinions of the highest performers he could outperform professional dermatologists. He also found that by combining algorithms trained to detect skin cancer with the opinions of experts, he could outperform either method on its own.

“The core insight was you do two things,” Duhaime explains. “The first thing is to measure people’s performance — which sounds obvious, but even in the medical domain it isn’t done much. If you ask a dermatologist if they’re good, they say, ‘Yeah of course, I’m a dermatologist.’ They don’t necessarily know how good they are at specific tasks. The second thing is that when you get multiple opinions, you need to identify complementarities between the different people. You need to recognize that expertise is multidimensional, so it’s a little more like putting together the optimal trivia team than it is getting the five people who are all the best at the same thing. For example, one dermatologist might be better at identifying melanoma, whereas another might be better at classifying the severity of psoriasis.”
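The two-step recipe Duhaime describes — score each contributor on cases with known answers, then pool the opinions of those who pass muster, weighted by how good they are — can be sketched in a few lines. This is a minimal illustration with hypothetical data and function names, not Centaur’s actual algorithm:

```python
from collections import defaultdict

def score_labelers(gold_labels, labeler_answers, threshold=0.6):
    """Estimate each labeler's accuracy on 'gold' cases (known answers),
    keeping only labelers whose accuracy meets the threshold."""
    weights = {}
    for labeler, answers in labeler_answers.items():
        graded = [(case, ans) for case, ans in answers.items() if case in gold_labels]
        if not graded:
            continue  # no gold cases seen; can't assess this labeler yet
        acc = sum(ans == gold_labels[case] for case, ans in graded) / len(graded)
        if acc >= threshold:
            weights[labeler] = acc
    return weights

def pooled_label(case, labeler_answers, weights):
    """Accuracy-weighted vote over the retained labelers for one unknown case."""
    votes = defaultdict(float)
    for labeler, weight in weights.items():
        ans = labeler_answers[labeler].get(case)
        if ans is not None:
            votes[ans] += weight
    return max(votes, key=votes.get) if votes else None

# Hypothetical example: labeler "b" fails the gold cases and is excluded,
# so the pooled answer for the unknown case "c3" follows "a" and "c".
gold = {"c1": "melanoma", "c2": "benign"}
answers = {
    "a": {"c1": "melanoma", "c2": "benign", "c3": "melanoma"},
    "b": {"c1": "benign", "c2": "melanoma", "c3": "benign"},
    "c": {"c1": "melanoma", "c2": "benign", "c3": "melanoma"},
}
weights = score_labelers(gold, answers)
print(pooled_label("c3", answers, weights))  # → melanoma
```

A fuller version of the “optimal trivia team” idea would track accuracy per task type (say, melanoma detection versus psoriasis grading) and weight each vote by the labeler’s skill on that particular task, capturing the complementarity Duhaime describes.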

While still pursuing his PhD, Duhaime founded Centaur and began using MIT’s entrepreneurial ecosystem to further develop the idea. He received funding from MIT’s Sandbox Innovation Fund in 2017 and participated in the delta v startup accelerator run by the Martin Trust Center for MIT Entrepreneurship over the summer of 2018. The experience helped him get into the prestigious Y Combinator accelerator later that year.

The DiagnosUs app, which Duhaime developed with Centaur co-founders Zach Rausnitz and Tom Gellatly, is designed to help users test and improve their skills. Duhaime says about half of users are medical school students and the other half are mostly doctors, nurses, and other medical professionals.

“It’s better than studying for exams, where you might have multiple choice questions,” Duhaime says. “They get to see actual cases and practice.”

Centaur gathers millions of opinions every week from tens of thousands of people around the world. Duhaime says most people earn coffee money, although the person who’s earned the most from the platform is a doctor in eastern Europe who’s made around $10,000.

“People can do it on the couch, they can do it on the T,” Duhaime says. “It doesn’t feel like work — it’s fun.”

The approach stands in sharp contrast to traditional data labeling and AI content moderation, which are typically outsourced to low-resource countries.

Centaur’s approach produces accurate results, too. In a paper with researchers from Brigham and Women’s Hospital, Massachusetts General Hospital (MGH), and Eindhoven University of Technology, Centaur showed its crowdsourced opinions labeled lung ultrasounds as reliably as experts did. Another study with researchers at Memorial Sloan Kettering showed crowdsourced labeling of dermoscopic images was more accurate than that of highly experienced dermatologists. Beyond images, Centaur’s platform also works with video, audio, text from sources like research papers or anonymized conversations between doctors and patients, and waveforms from electroencephalograms (EEGs) and electrocardiograms (ECGs).

Finding the experts

Centaur has found that the best performers come from surprising places. In 2021, to collect expert opinions on EEG patterns, researchers held a contest through the DiagnosUs app at a conference featuring about 50 epileptologists, each with more than 10 years of experience. The organizers made a custom shirt to give to the contest’s winner, whom they assumed would be in attendance at the conference.

But when the results came in, a pair of medical students in Ghana, Jeffery Danquah and Andrews Gyabaah, had beaten everyone in attendance. The highest-ranked conference attendee had come in ninth.

“I started by doing it for the money, but I realized it actually started helping me a lot,” Gyabaah told Centaur’s team later. “There were times in the clinic where I realized that I was doing better than others because of what I learned on the DiagnosUs app.”

As AI continues to change the nature of work, Duhaime believes Centaur Labs will be used as an ongoing check on AI models.

“Right now, we’re helping people train algorithms primarily, but increasingly I think we’ll be used for monitoring algorithms and in conjunction with algorithms, basically serving as the humans in the loop for a range of tasks,” Duhaime says. “You might think of us less as a way to train AI and more as a part of the full life cycle, where we’re providing feedback on models’ outputs or monitoring the model.”

Duhaime sees the work of humans and AI algorithms becoming increasingly integrated and believes Centaur Labs has an important role to play in that future.

“It’s not just train algorithm, deploy algorithm,” Duhaime says. “Instead, there will be these digital assembly lines all throughout the economy, and you need on-demand expert human judgment infused in different places along the value chain.”

Source: Gamifying medical data labeling to advance AI
