MIT engineers create an energy-storing supercapacitor from ancient materials

Two of humanity's most ubiquitous historical materials, cement and carbon black (which resembles very fine charcoal), may form the basis for a novel, low-cost energy storage system, according to a new study. The technology could facilitate the use of renewable energy sources such as solar, wind, and tidal power by allowing energy networks to remain stable despite fluctuations in renewable energy supply.

The two materials, the researchers found, can be combined with water to make a supercapacitor — an alternative to batteries — that could provide storage of electrical energy. As an example, the MIT researchers who developed the system say that their supercapacitor could eventually be incorporated into the concrete foundation of a house, where it could store a full day’s worth of energy while adding little or nothing to the cost of the foundation and still providing the needed structural strength. The researchers also envision a concrete roadway that could provide contactless recharging for electric cars as they travel over that road.

The simple but innovative technology is described this week in the journal PNAS, in a paper by MIT professors Franz-Josef Ulm, Admir Masic, and Yang Shao-Horn, and four others at MIT and at the Wyss Institute for Biologically Inspired Engineering.

Capacitors are in principle very simple devices, consisting of two electrically conductive plates immersed in an electrolyte and separated by a membrane. When a voltage is applied across the capacitor, positively charged ions from the electrolyte accumulate on the negatively charged plate, while the positively charged plate accumulates negatively charged ions. Since the membrane in between the plates blocks charged ions from migrating across, this separation of charges creates an electric field between the plates, and the capacitor becomes charged. The two plates can maintain this pair of charges for a long time and then deliver them very quickly when needed. Supercapacitors are simply capacitors that can store exceptionally large charges.
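The description above can be put in numbers with the standard capacitor relations: charge scales with voltage (Q = CV) and stored energy with the square of voltage (E = ½CV²). A minimal sketch, using purely illustrative values rather than figures from the study:

```python
# Standard capacitor formulas: Q = C * V, E = 0.5 * C * V^2.
# The 3000 F / 1 V cell below is an illustrative example,
# not a device from the MIT paper.

def stored_energy_joules(capacitance_farads: float, voltage_volts: float) -> float:
    """Energy held by a capacitor charged to a given voltage."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

energy_j = stored_energy_joules(3000.0, 1.0)   # 1500 J
energy_wh = energy_j / 3600                    # ~0.42 Wh
print(energy_j, round(energy_wh, 2))
```

The quadratic dependence on voltage is why stacking cells in series (raising the voltage) pays off so quickly for storage.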

The amount of energy a capacitor can store depends on the total surface area of its conductive plates. The key to the new supercapacitors developed by this team comes from a method of producing a cement-based material with an extremely high internal surface area due to a dense, interconnected network of conductive material within its bulk volume. The researchers achieved this by introducing carbon black — which is highly conductive — into a concrete mixture along with cement powder and water, and letting it cure. The water naturally forms a branching network of openings within the structure as it reacts with cement, and the carbon migrates into these spaces to make wire-like structures within the hardened cement. The network has a fractal-like geometry, with larger branches sprouting smaller branches, and those sprouting even smaller branchlets, and so on, ending up with an extremely large surface area within the confines of a relatively small volume. The material is then soaked in a standard electrolyte material, such as potassium chloride, a kind of salt, which provides the charged particles that accumulate on the carbon structures. Two electrodes made of this material, separated by a thin space or an insulating layer, form a very powerful supercapacitor, the researchers found.

The two plates of the capacitor function just like the two poles of a rechargeable battery of equivalent voltage: When connected to a source of electricity, as with a battery, energy gets stored in the plates, and then when connected to a load, the electrical current flows back out to provide power.

“The material is fascinating,” Masic says, “because you have the most-used manmade material in the world, cement, that is combined with carbon black, that is a well-known historical material — the Dead Sea Scrolls were written with it. You have these at least two-millennia-old materials that when you combine them in a specific manner you come up with a conductive nanocomposite, and that’s when things get really interesting.”

As the mixture sets and cures, he says, “The water is systematically consumed through cement hydration reactions, and this hydration fundamentally affects nanoparticles of carbon because they are hydrophobic (water repelling).” As the mixture evolves, “the carbon black is self-assembling into a connected conductive wire,” he says. The process is easily reproducible, with materials that are inexpensive and readily available anywhere in the world. And the amount of carbon needed is very small — as little as 3 percent by volume of the mix — to achieve a percolated carbon network, Masic says.

Supercapacitors made of this material have great potential to aid in the world’s transition to renewable energy, Ulm says. The principal sources of emissions-free energy, wind, solar, and tidal power, all produce their output at variable times that often do not correspond to the peaks in electricity usage, so ways of storing that power are essential. “There is a huge need for big energy storage,” he says, and existing batteries are too expensive and mostly rely on materials such as lithium, whose supply is limited, so cheaper alternatives are badly needed. “That’s where our technology is extremely promising, because cement is ubiquitous,” Ulm says.

The team calculated that a block of nanocarbon-black-doped concrete 45 cubic meters in volume — equivalent to a cube about 3.5 meters on a side — would have enough capacity to store about 10 kilowatt-hours of energy, which is considered the average daily electricity usage for a household. Since the concrete would retain its strength, a house with a foundation made of this material could store a day’s worth of energy produced by solar panels or windmills and allow it to be used whenever it’s needed. And, supercapacitors can be charged and discharged much more rapidly than batteries.
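A quick back-of-the-envelope check of those figures (the 45 cubic meters and 10 kilowatt-hours come from the article; the rest is arithmetic):

```python
# Sanity-check the quoted figures.
volume_m3 = 45.0    # block volume quoted in the article
energy_kwh = 10.0   # stored energy quoted in the article

cube_side_m = volume_m3 ** (1 / 3)          # edge length of an equivalent cube
energy_density = energy_kwh / volume_m3     # kWh stored per cubic meter

print(f"cube side: {cube_side_m:.2f} m")                # ~3.56 m, i.e. "about 3.5 meters"
print(f"energy density: {energy_density:.3f} kWh/m^3")  # ~0.222 kWh/m^3
```

The implied energy density is low compared with batteries, which is why the researchers pitch the idea for structures that need the bulk concrete anyway.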

After a series of tests used to determine the most effective ratios of cement, carbon black, and water, the team demonstrated the process by making small supercapacitors, about the size of some button-cell batteries, about 1 centimeter across and 1 millimeter thick, that could each be charged to 1 volt, comparable to a 1-volt battery. They then connected three of these to demonstrate their ability to light up a 3-volt light-emitting diode (LED). Having proved the principle, they now plan to build a series of larger versions, starting with ones about the size of a typical 12-volt car battery, then working up to a 45-cubic-meter version to demonstrate its ability to store a house’s worth of power.
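The series connection behaves the way it does for any capacitors: voltages add while capacitance divides. A small sketch (the 1-volt cell rating is from the article; the per-cell capacitance is a made-up placeholder):

```python
def series_stack(cell_capacitance_f: float, cell_voltage_v: float, n_cells: int):
    """Total capacitance and voltage of n identical capacitor cells in series."""
    return cell_capacitance_f / n_cells, cell_voltage_v * n_cells

# Three 1-volt cells in series, as in the LED demonstration.
# The 10 F figure is a placeholder, not a measured value from the paper.
c_total, v_total = series_stack(10.0, 1.0, 3)
print(c_total, v_total)   # capacitance drops to a third; voltage reaches 3 V
```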

There is a tradeoff between the storage capacity of the material and its structural strength, they found. Adding more carbon black lets the resulting supercapacitor store more energy but leaves the concrete slightly weaker, which could be acceptable for applications where the concrete is not playing a structural role or where the full strength-potential of concrete is not required. For applications such as a foundation, or structural elements of the base of a wind turbine, the “sweet spot” is around 10 percent carbon black in the mix, they found.

Another potential application for carbon-cement supercapacitors is for building concrete roadways that could store energy produced by solar panels alongside the road and then deliver that energy to electric vehicles traveling along the road using the same kind of technology used for wirelessly rechargeable phones. A related type of car-recharging system is already being developed by companies in Germany and the Netherlands, but using standard batteries for storage.

Initial uses of the technology might be for isolated homes or buildings or shelters far from grid power, which could be powered by solar panels attached to the cement supercapacitors, the researchers say.

Ulm says that the system is very scalable, as the energy-storage capacity is a direct function of the volume of the electrodes. “You can go from 1-millimeter-thick electrodes to 1-meter-thick electrodes, and by doing so basically you can scale the energy storage capacity from lighting an LED for a few seconds, to powering a whole house,” he says.

Depending on the properties desired for a given application, the system could be tuned by adjusting the mixture. For a vehicle-charging road, very fast charging and discharging rates would be needed, while for powering a home “you have the whole day to charge it up,” so slower-charging material could be used, Ulm says.

“So, it’s really a multifunctional material,” he adds. Besides its ability to store energy in the form of supercapacitors, the same kind of concrete mixture can be used as a heating system, by simply applying electricity to the carbon-laced concrete.

Ulm sees this as “a new way of looking toward the future of concrete as part of the energy transition.”

The research team also included postdocs Nicolas Chanut and Damian Stefaniuk at MIT’s Department of Civil and Environmental Engineering, James Weaver at the Wyss Institute, and Yunguang Zhu in MIT’s Department of Mechanical Engineering. The work was supported by the MIT Concrete Sustainability Hub, with sponsorship by the Concrete Advancement Foundation.

Source: MIT engineers create an energy-storing supercapacitor from ancient materials

Using AI to protect against AI image manipulation

As we enter a new era where technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the specter of misuse looms large. Recently, advanced generative models such as DALL-E and Midjourney, celebrated for their impressive precision and user-friendly interfaces, have made the production of hyper-realistic images relatively effortless. With the barriers to entry lowered, even inexperienced users can generate and manipulate high-quality images from simple text descriptions — ranging from innocent image alterations to malicious changes. Techniques like watermarking offer a promising solution, but preventing misuse requires a preemptive (rather than merely post hoc) measure.

In the quest to create such a new measure, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed “PhotoGuard,” a technique that uses perturbations — minuscule alterations in pixel values invisible to the human eye but detectable by computer models — that effectively disrupt the model’s ability to manipulate the image.

PhotoGuard uses two different “attack” methods to generate these perturbations. The more straightforward “encoder” attack targets the image’s latent representation in the AI model, causing the model to perceive the image as a random entity. The more sophisticated “diffusion” one defines a target image and optimizes the perturbations to make the final image resemble the target as closely as possible.

“Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” says Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS), affiliate of MIT CSAIL, and lead author of a new paper about PhotoGuard.

“In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

PhotoGuard in practice

AI models view an image differently from how humans do. They see an image as a complex set of mathematical data points that describe every pixel's color and position — this is the image's latent representation. The encoder attack introduces minor adjustments into this mathematical representation, causing the AI model to perceive the image as a random entity. As a result, any attempt to manipulate the image using the model becomes nearly impossible. The changes introduced are so minute that they are invisible to the human eye, thus preserving the image's visual integrity while ensuring its protection.
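The encoder attack can be illustrated with a deliberately tiny stand-in model. The sketch below substitutes a random linear map for a real image encoder and plain projected gradient descent for the paper's machinery; everything here is a toy assumption, not PhotoGuard's actual code. The idea it demonstrates is the same: tiny, budget-limited pixel changes that steer the encoder's output toward an arbitrary target representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a 64-value "image" and a linear "encoder"
# (PhotoGuard itself targets the encoder of a large diffusion model).
image = rng.random(64)
W = rng.standard_normal((16, 64))     # stand-in encoder weights
target = rng.standard_normal(16)      # arbitrary target latent representation

eps = 0.03                            # perturbation budget (keeps changes tiny)
delta = np.zeros_like(image)

for _ in range(200):
    latent = W @ (image + delta)
    grad = W.T @ (latent - target)    # gradient of 0.5 * ||latent - target||^2
    delta -= 0.05 * grad / (np.linalg.norm(grad) + 1e-12)  # normalized step
    delta = np.clip(delta, -eps, eps)                      # project into budget

before = np.linalg.norm(W @ image - target)
after = np.linalg.norm(W @ (image + delta) - target)
print(after < before, float(np.abs(delta).max()) <= eps)
```

The perturbed image's latent ends up closer to the target than the original's, even though no pixel moved by more than the budget.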

The second and decidedly more intricate “diffusion” attack strategically targets the entire diffusion model end-to-end. This involves determining a desired target image, and then initiating an optimization process with the intention of closely aligning the generated image with this preselected target.

In implementing the technique, the team created perturbations within the input space of the original image. These perturbations are then applied to the images during the inference stage, offering a robust defense against unauthorized manipulation.

“The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike,” says MIT professor of EECS and CSAIL principal investigator Aleksander Madry, who is also an author on the paper. “It is thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that important effort.”

The diffusion attack is more computationally intensive than its simpler sibling, and requires significant GPU memory. The team says that approximating the diffusion process with fewer steps mitigates the issue, thus making the technique more practical.

To better illustrate the attack, consider an art project, for example. The original image is a drawing, and the target image is another drawing that’s completely different. The diffusion attack is like making tiny, invisible changes to the first drawing so that, to an AI model, it begins to resemble the second drawing. However, to the human eye, the original drawing remains unchanged.

By doing this, any AI model attempting to modify the original image will now inadvertently make changes as if dealing with the target image, thereby protecting the original image from intended manipulation. The result is a picture that remains visually unaltered for human observers, but protects against unauthorized edits by AI models.

For a real example of PhotoGuard in use, consider an image with multiple faces. You could mask any faces you don’t want to modify, and then prompt with “two men attending a wedding.” Upon submission, the system will adjust the image accordingly, creating a plausible depiction of two men participating in a wedding ceremony.

Now, consider safeguarding the image from being edited; adding perturbations to the image before upload can immunize it against modifications. In this case, the final output will lack realism compared to the original, non-immunized image.

All hands on deck

Key allies in the fight against image manipulation are the creators of the image-editing models, says the team. For PhotoGuard to be effective, an integrated response from all stakeholders is necessary. “Policymakers should consider implementing regulations that mandate companies to protect user data from such manipulations. Developers of these AI models could design APIs that automatically add perturbations to users’ images, providing an added layer of protection against unauthorized edits,” says Salman.

Despite PhotoGuard’s promise, it’s not a panacea. Once an image is online, individuals with malicious intent could attempt to reverse engineer the protective measures by applying noise, cropping, or rotating the image. However, there is plenty of previous work from the adversarial examples literature that can be utilized here to implement robust perturbations that resist common image manipulations.

“A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” says Salman. “And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools. As we tread into this new era of generative models, let’s strive for potential and protection in equal measures.”

“The prospect of using attacks on machine learning to protect us from abusive uses of this technology is very compelling,” says Florian Tramèr, an assistant professor at ETH Zürich. “The paper has a nice insight that the developers of generative AI models have strong incentives to provide such immunization protections to their users, which could even be a legal requirement in the future. However, designing image protections that effectively resist circumvention attempts is a challenging problem: Once the generative AI company commits to an immunization mechanism and people start applying it to their online images, we need to ensure that this protection will work against motivated adversaries who might even use better generative AI models developed in the near future. Designing such robust protections is a hard open problem, and this paper makes a compelling case that generative AI companies should be working on solving it.”

Salman wrote the paper alongside fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS ’18, as well as Andrew Ilyas ’18, MEng ’18; all three are EECS graduate students and MIT CSAIL affiliates. The team’s work was partially done on the MIT Supercloud compute cluster, supported by U.S. National Science Foundation grants and Open Philanthropy, and based upon work supported by the U.S. Defense Advanced Research Projects Agency. It was presented at the International Conference on Machine Learning this July.

Source: Using AI to protect against AI image manipulation

A wearable ultrasound scanner could detect breast cancer earlier

When breast cancer is diagnosed in the earliest stages, the survival rate is nearly 100 percent. However, for tumors detected in later stages, that rate drops to around 25 percent.

In hopes of improving the overall survival rate for breast cancer patients, MIT researchers have designed a wearable ultrasound device that could allow people to detect tumors when they are still in early stages. In particular, it could be valuable for patients at high risk of developing breast cancer in between routine mammograms.

The device is a flexible patch that can be attached to a bra, allowing the wearer to move an ultrasound tracker along the patch and image the breast tissue from different angles. In the new study, the researchers showed that they could obtain ultrasound images with resolution comparable to that of the ultrasound probes used in medical imaging centers.

“We changed the form factor of the ultrasound technology so that it can be used in your home. It’s portable and easy to use, and provides real-time, user-friendly monitoring of breast tissue,” says Canan Dagdeviren, an associate professor in MIT’s Media Lab and the senior author of the study.

MIT graduate student Wenya Du, Research Scientist Lin Zhang, Emma Suh ’23, and Dabin Lin, a professor at Xi’an Technological University, are the lead authors of the paper, which appears today in Science Advances.

A wearable diagnostic

For this project, Dagdeviren drew inspiration from her late aunt, Fatma Caliskanoglu, who was diagnosed with late-stage breast cancer at age 49, despite having regular cancer screens, and passed away six months later. At her aunt’s bedside, Dagdeviren, then a postdoc at MIT, drew up a rough schematic of a diagnostic device that could be incorporated into a bra and would allow for more frequent screening of individuals at high risk for breast cancer. 

Breast tumors that develop in between regularly scheduled mammograms — known as interval cancers — account for 20 to 30 percent of all breast cancer cases, and these tumors tend to be more aggressive than those found during routine scans.

“My goal is to target the people who are most likely to develop interval cancer,” says Dagdeviren, whose research group specializes in developing wearable electronic devices that conform to the body. “With more frequent screening, our goal is to increase the survival rate to up to 98 percent.”

To make her vision of a diagnostic bra a reality, Dagdeviren designed a miniaturized ultrasound scanner that could allow the user to perform imaging at any time. The scanner is based on the same kind of ultrasound technology used in medical imaging centers, but incorporates a novel piezoelectric material that made the miniaturization possible.

To make the device wearable, the researchers designed a flexible, 3D-printed patch, which has honeycomb-like openings. Using magnets, this patch can be attached to a bra that has openings that allow the ultrasound scanner to contact the skin. The ultrasound scanner fits inside a small tracker that can be moved to six different positions, allowing the entire breast to be imaged. The scanner can also be rotated to take images from different angles, and does not require any special expertise to operate.

“This technology provides a fundamental capability in the detection and early diagnosis of breast cancer, which is key to a positive outcome,” says Anantha Chandrakasan, dean of MIT’s School of Engineering, the Vannevar Bush Professor of Electrical Engineering and Computer Science, and one of the authors of the study. “This work will significantly advance ultrasound research and medical device designs, leveraging advances in materials, low-power circuits, AI algorithms, and biomedical systems.”

Early detection

Working with the MIT Center for Clinical and Translational Research, the researchers tested their device on one human subject, a 71-year-old woman with a history of breast cysts. Using the new device, the researchers were able to detect the cysts, which were as small as 0.3 centimeters in diameter — the size of early-stage tumors. They also showed that the device achieved resolution comparable to that of traditional ultrasound, and that tissue can be imaged at depths of up to 8 centimeters.

“Access to quality and affordable health care is essential for early detection and diagnosis. As a nurse I have witnessed the negative outcomes of a delayed diagnosis. This technology holds the promise of breaking down the many barriers for early breast cancer detection by providing a more reliable, comfortable, and less intimidating diagnostic,” says Catherine Ricciardi, nurse director at MIT’s Center for Clinical and Translational Research and an author of the study.

To see the ultrasound images, the researchers currently have to connect their scanner to the same kind of ultrasound machine used in imaging centers. However, they are now working on a miniaturized version of the imaging system that would be about the size of a smartphone.

The wearable ultrasound patch can be used over and over, and the researchers envision that it could be used at home by people who are at high risk for breast cancer and could benefit from frequent screening. It could also help diagnose cancer in people who don’t have regular access to screening.

“Breast cancer is the most common cancer among women, and it is treatable when detected early,” says Tolga Ozmen, a breast cancer surgeon at Massachusetts General Hospital who is also an author of the study. “One of the main obstacles in imaging and early detection is the commute that the women have to make to an imaging center. This conformable ultrasound patch is a highly promising technology as it eliminates the need for women to travel to an imaging center.”

The researchers hope to develop a workflow so that once data are gathered from a subject, artificial intelligence can be used to analyze how the images change over time, which could offer more accurate diagnostics than relying on the assessment of a radiologist comparing images taken years apart. They also plan to explore adapting the ultrasound technology to scan other parts of the body.

The research was funded, in part, by the National Science Foundation, a 3M Non-Tenured Faculty Award, the Sagol Weizmann-MIT Bridge Program, and MIT Media Lab Consortium Funding.

Source: A wearable ultrasound scanner could detect breast cancer earlier

Changing attitudes about jobs and gender in India

As a high school student who loved math, Lisa Ho ’17 was drawn by MIT’s spirit of “mens et manus” (“mind and hand”) and the opportunities to study both a subject and its practical applications. Now a PhD candidate in economics, Ho also appreciates the lessons in perseverance gleaned from her time on her high school robotics team that have translated to her current studies.

“It was the first time I was heavily invested in a project where the challenge was open-ended with no correct answer, so you were never ‘done’ with the work,” says Ho. “Effort didn’t necessarily translate into external validation. But I think that helped me to develop some patience and appetite for ambiguous work that doesn’t always pay off quickly.”

Labor and gender

When she first arrived at MIT, Ho was looking for a way to apply her interest in math to tackling social issues, and she initially settled on computer science. But as a junior, she took a new class in the Department of Economics offered by professors Esther Duflo and Sara Ellison, 14.31 (Data Analysis for Social Scientists), which piqued her interest in an entirely new aspect of numbers.

“I had a sense that I wanted to apply statistics and coding to study social issues,” explains Ho. “What I didn’t anticipate was that taking the class would teach me about what economics could be. Before that class, I thought that economists studied a much narrower set of topics.”

One study that Ho remembers learning about in that class examined gender-related differences in whether candidates who lose elections continue their political careers. She was struck by how economic principles and data analysis could be used to address a huge variety of questions about society.

For her dissertation, Ho is studying the intersection of gender and labor-force participation in India, where there’s a particularly large gap between men and women. So far, Ho’s research has found that most available jobs are not compatible with the expectations of domestic responsibilities that many women face.

Given that finding, “Two strategies come to mind,” explains Ho. “You can either try to change people’s attitudes around gendered divisions of labor at home so that women can take the jobs that are available, or you could try to change the jobs to be better-suited to people’s attitudes.”

Ho focuses on the latter strategy, noting that attitudes and behavior mutually inform each other, and so short-term, part-time jobs that shift attitudes might serve as a stepping stone to more intensive labor market involvement. She spent much of 2021-2022 in West Bengal running a randomized controlled trial with over 1,500 households. With the help of her co-authors Anahita Karandikar and Suhani Jalota, along with a 25-person field team, Ho offered her study participants a set of jobs with different flexible arrangements, varying these attributes experimentally to test which ones made the job most appealing to women who would otherwise be outside the labor force.

Some factors investigated included the ability to multitask between work and childcare, flexibility to choose work hours, and the ability to work from home. To estimate the causal effect of women’s employment, jobs were offered to a randomly selected subset of survey participants. Then, Ho and her team evaluated whether having job experience influenced attitudes and made households more open to women’s work. They also studied how workplace flexibility impacted job performance. Coordinating the logistics for a large-scale study was difficult at times, she says, but connecting with the study participants made it all worth it.
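The core logic of such a randomized evaluation can be sketched in a few lines of code. The data below are entirely synthetic (a made-up attitude score and a made-up treatment effect, not the study's results); the point is only that random assignment lets a simple difference in means estimate the causal effect of receiving a job offer.

```python
import random

random.seed(7)
N = 1500  # roughly the scale of the West Bengal study

# Synthetic baseline "attitude toward women's work" scores (illustrative units).
baseline = [random.gauss(50.0, 10.0) for _ in range(N)]

# Randomize: half the households receive the job offer (the treatment).
order = list(range(N))
random.shuffle(order)
treated = set(order[: N // 2])

TRUE_EFFECT = 5.0  # assumed (synthetic) effect of job experience on attitudes
outcome = [
    baseline[i] + (TRUE_EFFECT if i in treated else 0.0) + random.gauss(0.0, 3.0)
    for i in range(N)
]

def mean(xs):
    return sum(xs) / len(xs)

# Under randomization, the difference in mean outcomes between the two
# groups is an unbiased estimate of the treatment effect.
estimate = mean([outcome[i] for i in treated]) - mean(
    [outcome[i] for i in range(N) if i not in treated]
)
print(round(estimate, 2))  # lands near the true effect of 5.0
```

In the actual study, the "treatment" itself was varied (multitasking, flexible hours, work from home) to identify which job attributes drew women into the labor force.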

“I love doing research and having my research be related to what’s on people’s minds day-to-day,” says Ho. “One of the most enjoyable parts of the project on women’s employment for me was accompanying my field team while they conducted surveys. Many of the questions we ask stir up passionate discussion, like the question about whether men should do an equal share of the housework.”

Part of Ho’s interest in this area is personal. Growing up, her grandmother told her how her mother, who lived in the Bronx, had wanted to work but wasn’t allowed to.

“My great-grandfather wouldn’t allow it because he was worried about what other people would think,” says Ho, who was born in Singapore and spent most of her school years in the U.K. “He said that if his wife worked, other people would think he couldn’t support his family. Even looking within my own family, I can see there’s been a lot of progress in the last century, but there’s still a ton of work to be done with respect to women all over the world.”

Ultimately, Ho’s family and upbringing — one of her parents is Singaporean and the other is American — helped her to develop a broad perspective that she now utilizes when trying to answer her research questions. In addition to her global experiences as a child and an undergraduate, Ho spent a year as a Schwarzman Scholar at Tsinghua University in China before returning to MIT for graduate school.

“I’ve had a lot of exposure to different cultures and points of view, and that helps me in my research in terms of feeling at home among any group of people,” she says. “And it means I’m already used to a wide range of views on these topics, which keeps me open-minded when I listen to our study participants.”

A passion for teaching

Throughout her time at MIT, Ho has nurtured a passion for teaching and mentorship. As an undergraduate, she worked with the Educational Studies Program to organize outreach activities for middle and high school students. And she has enjoyed her experience as a teaching assistant during her graduate program.  

“Part of why I feel like TA’ing MIT undergrads is so rewarding is because I used to be them and now I’m on the other side,” she says. “They’re still trying to figure out what they’re interested in and what they want to do, which makes it feel very impactful to introduce them to new topics and to show them — just like I was shown as an undergrad! — that economics is a much broader field than most college students think when they graduate from high school.”

As she approaches the completion of her degree, Ho, too, is giving thought to what comes next. She wants to have the freedom to pursue the questions that move her the most. For now, she is making the most of her time at MIT and the opportunity to learn from the professors who inspired her to pursue economics in the first place. “I want to be an economist whose work is grounded in challenges that come up in people’s everyday lives — especially women’s and children’s,” she says. “To contribute to pragmatic policy solutions to those problems, being trained by professors in the MIT Development Economics group, such as Esther Duflo, Ben Olken, Frank Schilbach, David Atkin, and Abhijit Banerjee — it’s the ideal graduate school experience.”

Source: Changing attitudes about jobs and gender in India

School of Engineering second quarter 2023 awards

Faculty and researchers across MIT’s School of Engineering receive many awards in recognition of their scholarship, service, and overall excellence. The School of Engineering periodically recognizes their achievements by highlighting the honors, prizes, and medals won by faculty and research scientists working in our academic departments, labs, and centers.
 

Source: School of Engineering second quarter 2023 awards

Making sense of cell fate


Despite the proliferation of newer treatments such as immunotherapy and targeted therapies, radiation and chemotherapy remain the frontline treatment for cancer patients. About half of all patients still receive radiation and 60-80 percent receive chemotherapy.

Both radiation and chemotherapy work by damaging DNA, taking advantage of a vulnerability specific to cancer cells. Healthy cells are more likely to survive radiation and chemotherapy since their mechanisms for identifying and repairing DNA damage are intact. In cancer cells, these repair mechanisms are compromised by mutations. When cancer cells cannot adequately respond to the DNA damage caused by radiation and chemotherapy, ideally, they undergo apoptosis or die by other means.

However, there is another fate for cells after DNA damage: senescence — a state where cells survive, but stop dividing. Senescent cells’ DNA has not been damaged enough to induce apoptosis but is too damaged to support cell division. While senescent cancer cells themselves are unable to proliferate and spread, they are bad actors in the fight against cancer because they seem to enable other cancer cells to develop more aggressively. Although a cancer cell’s fate is not apparent until a few days after treatment, the decision to survive, die, or enter senescence is made much earlier. But, precisely when and how that decision is made has not been well understood.

In a study of ovarian and osteosarcoma cancer cells appearing July 19 in Cell Systems, MIT researchers show that cell signaling proteins commonly associated with cell proliferation and apoptosis instead commit cancer cells to senescence within 12 hours of treatment with low doses of certain kinds of chemotherapy.

“When it comes to treating cancer, this study underscores that it’s important not to think too linearly about cell signaling,” says Michael Yaffe, who is a David H. Koch Professor of Science at MIT, the director of the MIT Center for Precision Cancer Medicine, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study. “If you assume that a particular treatment will always affect cancer cell signaling in the same way — you may be setting yourself up for many surprises, and treating cancers with the wrong combination of drugs.”

Using a combination of experiments with cancer cells and computational modeling, the team investigated the cell signaling mechanisms that prompt cancer cells to enter senescence after treatment with a commonly used anti-cancer agent. Their efforts singled out two protein kinases and a component of the AP-1 transcription factor complex as highly associated with the induction of senescence after DNA damage, despite the well-established roles for all of these molecules in promoting cell proliferation in cancer.

The researchers treated cancer cells with low and high doses of doxorubicin, a chemotherapy that interferes with the function of topoisomerase II, an enzyme that breaks and then repairs DNA strands during replication to fix tangles and other topological problems.

By measuring the effects of DNA damage on single cells at several time points ranging from six hours to four days after the initial exposure, the team created two datasets. In one dataset, the researchers tracked cell fate over time. For the second set, researchers measured relative cell signaling activity levels across a variety of proteins associated with responses to DNA damage or cellular stress, determination of cell fate, and progress through cell growth and division.

The two datasets were used to build a computational model that identifies correlations between time, dosage, signaling activity, and cell fate. The model singled out the activities of the MAP kinases Erk and JNK, along with the transcription factor c-Jun, a component of the AP-1 complex, as key drivers of the induction of senescence. The researchers then validated these computational findings by showing that inhibiting JNK and Erk after DNA damage successfully prevented cells from entering senescence.
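The article does not describe the model's internals, but the screen it sketches, ranking measured signals by how strongly they track a later fate, can be illustrated with a toy correlation analysis. Everything below is synthetic: the signal names (e.g., "jnk_12h") and the generative assumptions are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of simulated single cells

# Hypothetical early (12-hour) signaling measurements per cell.
signals = {
    "jnk_12h": rng.normal(0, 1, n),
    "erk_12h": rng.normal(0, 1, n),
    "p38_12h": rng.normal(0, 1, n),
    "akt_12h": rng.normal(0, 1, n),
}

# Assume fate (1 = senescent) is driven mostly by early JNK and Erk
# activity plus noise, mimicking the finding that the decision is
# made within ~12 hours of DNA damage.
drive = 2.0 * signals["jnk_12h"] + 1.5 * signals["erk_12h"] + rng.normal(0, 0.5, n)
fate = (drive > 0).astype(float)

# Rank each signal by the absolute correlation of its early activity
# with the eventual fate, as a correlation-based screen would.
corr = {name: abs(np.corrcoef(x, fate)[0, 1]) for name, x in signals.items()}
ranked = sorted(corr, key=corr.get, reverse=True)
print(ranked[:2])  # the two fate-driving signals surface at the top
```

In this toy setup the two signals that actually drive the simulated fate dominate the ranking, while the uncorrelated ones fall to the bottom; the study's model additionally handles time and dosage as variables.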

The researchers leveraged JNK and Erk inhibition to pinpoint exactly when cells made the decision to enter senescence. Surprisingly, they found that the decision to enter senescence was made within 12 hours of DNA damage, even though it took days to actually see the senescent cells accumulate. The team also found that with the passage of more time, these MAP kinases took on a different function: promoting the secretion of proinflammatory proteins called cytokines that are responsible for making other cancer cells proliferate and develop resistance to chemotherapy.

“Proteins like cytokines encourage ‘bad behavior’ in neighboring tumor cells that leads to more aggressive cancer progression,” says Tatiana Netterfield, a graduate student in the Yaffe lab and the lead author of the study. “Because of this, it is thought that senescent cells that stay near the tumor for long periods of time are detrimental to treating cancer.”

This study’s findings apply to cancer cells treated with a commonly used type of chemotherapy that stalls DNA replication after repair. But more broadly, the study emphasizes that “when treating cancer, it’s extremely important to understand the molecular characteristics of cancer cells and the contextual factors such as time and dosing that determine cell fate,” explains Netterfield.

The study, however, has more immediate implications for treatments that are already in use. One class of Erk inhibitors, MEK inhibitors, is used in the clinic with the expectation that it will curb cancer growth.

“We must be cautious about administering MEK inhibitors together with chemotherapies,” says Yaffe. “The combination may have the unintended effect of driving cells into proliferation, rather than senescence.”

In future work, the team will perform studies to understand how and why individual cells choose to proliferate instead of entering senescence. Additionally, the team is employing next-generation sequencing to understand which genes c-Jun regulates in order to push cells toward senescence.

This study was funded, in part, by the Charles and Marjorie Holloway Foundation and the MIT Center for Precision Cancer Medicine.

Source: Making sense of cell fate

How forests can cut carbon, restore ecosystems, and create jobs


To limit the frequency and severity of droughts, wildfires, flooding, and other adverse consequences of climate change, nearly 200 countries committed to the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius. According to the latest United Nations Intergovernmental Panel on Climate Change (IPCC) Report, achieving that goal will require both large-scale greenhouse gas (GHG) emissions reduction and removal of GHGs from the atmosphere.

At present, the most efficient and scalable GHG-removal strategy is the massive planting of trees through reforestation or afforestation — a “natural climate solution” (NCS) that extracts atmospheric carbon dioxide through photosynthesis and soil carbon sequestration.

Despite the potential of forestry-based NCS projects to address climate change, biodiversity loss, unemployment, and other societal needs — and their appeal to policymakers, funders, and citizens — they have yet to achieve critical mass, and often underperform due to a mix of interacting ecological, social, and financial constraints. To better understand these challenges and identify opportunities to overcome them, a team of researchers at Imperial College London and the MIT Joint Program on the Science and Policy of Global Change recently studied how environmental scientists, local stakeholders, and project funders perceive the risks and benefits of NCS projects, and how these perceptions impact project goals and performance. To that end, they surveyed and consulted with dozens of recognized experts and organizations spanning the fields of ecology, finance, climate policy, and social science.

The team’s analysis, which appears in the journal Frontiers in Climate, found two main factors that have hindered the success of forestry-based NCS projects.

First, the ambition of selected NCS projects (levels of carbon removal, ecosystem restoration, job creation, and other environmental and social targets) is limited by funders’ perceptions of their overall risk. Among other things, funders aim to minimize operational risk (e.g., will newly planted trees survive and grow?); political risk (e.g., just how secure is their access to the land where trees will be planted?); and reputational risk (e.g., will the project be perceived as an exercise in “greenwashing,” or fall far short of its promised environmental and social benefits?). Funders seeking a financial return on their initial investment are also concerned about the dependability of complex monitoring, reporting, and verification methods used to quantify atmospheric carbon removal, biodiversity gains, and other metrics of project performance.

Second, the environmental and social benefits of NCS projects are unlikely to be realized unless the local communities impacted by these projects are granted ownership over their implementation and outcomes. But while engaging with local communities is critical to project performance, it can be challenging both legally and financially to set up incentives (e.g., payment and other forms of compensation) to mobilize such engagement.

“Many carbon offset projects raise legitimate concerns about their effectiveness,” says study lead author Bonnie Waring, a senior lecturer at the Grantham Institute on Climate Change and the Environment, Imperial College London. “However, if natural climate solution projects are done properly, they can help with sustainable development and empower local communities.”

Drawing on surveys and consultations with NCS experts, stakeholders, and funders, the research team highlighted several recommendations on how to overcome key challenges faced by forestry-based NCS projects and boost their environmental and social performance.

These recommendations include encouraging funders to evaluate projects based on robust internal governance, support from regional and national governments, secure land tenure, material benefits for local communities, and full participation of community members from across a spectrum of socioeconomic groups; improving the credibility and verifiability of project emissions reductions and related co-benefits; and maintaining an open dialogue and shared costs and benefits among those who fund, implement, and benefit from these projects.

“Addressing climate change requires approaches that include emissions mitigation from economic activities paired with greenhouse gas reductions by natural ecosystems,” says Sergey Paltsev, a co-author of the study and deputy director of the MIT Joint Program. “Guided by these recommendations, we advocate for a proper scaling-up of NCS activities from project levels to help assure integrity of emissions reductions across entire countries.”

Source: How forests can cut carbon, restore ecosystems, and create jobs

A simpler method for learning to control a robot


Researchers from MIT and Stanford University have devised a new machine-learning approach that could be used to control a robot, such as a drone or autonomous vehicle, more effectively and efficiently in dynamic environments where conditions can change rapidly.

This technique could help an autonomous vehicle learn to compensate for slippery road conditions to avoid going into a skid, allow a robotic free-flyer to tow different objects in space, or enable a drone to closely follow a downhill skier despite being buffeted by strong winds.

The researchers’ approach incorporates certain structure from control theory into the process for learning a model in such a way that leads to an effective method of controlling complex dynamics, such as those caused by impacts of wind on the trajectory of a flying vehicle. One way to think about this structure is as a hint that can help guide how to control a system.

“The focus of our work is to learn intrinsic structure in the dynamics of the system that can be leveraged to design more effective, stabilizing controllers,” says Navid Azizan, the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS). “By jointly learning the system’s dynamics and these unique control-oriented structures from data, we’re able to naturally create controllers that function much more effectively in the real world.”

Using this structure in a learned model, the researchers’ technique immediately extracts an effective controller from the model, as opposed to other machine-learning methods that require a controller to be derived or learned separately with additional steps. With this structure, their approach is also able to learn an effective controller using less data than other approaches. This could help their learning-based control system achieve better performance faster in rapidly changing environments.

“This work tries to strike a balance between identifying structure in your system and just learning a model from data,” says lead author Spencer M. Richards, a graduate student at Stanford University. “Our approach is inspired by how roboticists use physics to derive simpler models for robots. Physical analysis of these models often yields a useful structure for the purposes of control — one that you might miss if you just tried to naively fit a model to data. Instead, we try to identify similarly useful structure from data that indicates how to implement your control logic.”

Additional authors of the paper are Jean-Jacques Slotine, professor of mechanical engineering and of brain and cognitive sciences at MIT, and Marco Pavone, associate professor of aeronautics and astronautics at Stanford. The research will be presented at the International Conference on Machine Learning (ICML).

Learning a controller

Determining the best way to control a robot to accomplish a given task can be a difficult problem, even when researchers know how to model everything about the system.

A controller is the logic that enables a drone to follow a desired trajectory, for example. This controller would tell the drone how to adjust its rotor forces to compensate for the effect of winds that can knock it off a stable path to reach its goal.

This drone is a dynamical system — a physical system that evolves over time. In this case, its position and velocity change as it flies through the environment. If such a system is simple enough, engineers can derive a controller by hand. 

Modeling a system by hand intrinsically captures a certain structure based on the physics of the system. For instance, if a robot were modeled manually using differential equations, these would capture the relationship between velocity, acceleration, and force. Acceleration is the rate of change in velocity over time, which is determined by the mass of and forces applied to the robot.
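The hand-derived structure described above can be made concrete with a minimal sketch: a point mass obeying Newton's second law, F = ma, stepped forward with explicit Euler integration. This is purely illustrative (the mass, force, and time step are invented numbers); a real drone model would add rotor and aerodynamic terms.

```python
# Point mass under a constant force, integrated with Euler steps.
mass = 1.5            # kg (illustrative value)
dt = 0.01             # integration time step, s
pos, vel = 0.0, 0.0   # start at rest at the origin

for _ in range(100):          # simulate 1 second
    force = 3.0               # constant applied force, N
    acc = force / mass        # acceleration from F = m*a
    vel += acc * dt           # velocity is the integral of acceleration
    pos += vel * dt           # position is the integral of velocity

print(round(vel, 3))  # → 2.0, matching v = a*t = (3.0/1.5) * 1 s
```

The differential equations here encode exactly the structure the passage describes: acceleration is the rate of change of velocity and is determined by the mass and the applied force.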

But often the system is too complex to be exactly modeled by hand. Aerodynamic effects, like the way swirling wind pushes a flying vehicle, are notoriously difficult to derive manually, Richards explains. Researchers would instead take measurements of the drone’s position, velocity, and rotor speeds over time, and use machine learning to fit a model of this dynamical system to the data. But these approaches typically don’t learn a control-based structure. This structure is useful in determining how to best set the rotor speeds to direct the motion of the drone over time.

Once they have modeled the dynamical system, many existing approaches also use data to learn a separate controller for the system.

“Other approaches that try to learn dynamics and a controller from data as separate entities are a bit detached philosophically from the way we normally do it for simpler systems. Our approach is more reminiscent of deriving models by hand from physics and linking that to control,” Richards says.

Identifying structure

The team from MIT and Stanford developed a technique that uses machine learning to learn the dynamics model, but in such a way that the model has some prescribed structure that is useful for controlling the system.

With this structure, they can extract a controller directly from the dynamics model, rather than using data to learn an entirely separate model for the controller.

“We found that beyond learning the dynamics, it’s also essential to learn the control-oriented structure that supports effective controller design. Our approach of learning state-dependent coefficient factorizations of the dynamics has outperformed the baselines in terms of data efficiency and tracking capability, proving to be successful in efficiently and effectively controlling the system’s trajectory,” Azizan says. 
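The "state-dependent coefficient factorization" Azizan mentions can be sketched on a toy scalar system. Writing the dynamics as dx/dt = A(x)·x + B(x)·u lets a stabilizing controller be read off directly from the factorization, which is the spirit of extracting the controller from the learned model. Everything below is a hand-built stand-in, not the paper's learned method: the system dx/dt = -x³ + u, its factorization, and the target rate `lam` are all chosen for illustration.

```python
# Toy state-dependent coefficient (SDC) factorization of dx/dt = -x^3 + u.
def A(x):
    return -x**2      # -x^3 factored as (-x^2) * x

def B(x):
    return 1.0        # control enters the dynamics directly

def controller(x, lam=2.0):
    # Pick u = -k(x)*x so the closed loop is dx/dt = -lam*x:
    #   A(x) - B(x)*k(x) = -lam  =>  k(x) = (A(x) + lam) / B(x)
    k = (A(x) + lam) / B(x)
    return -k * x

# Simulate the closed loop with Euler steps; the state should decay.
x, dt = 1.0, 0.01
for _ in range(500):
    u = controller(x)
    x += dt * (A(x) * x + B(x) * u)

print(abs(x) < 0.01)  # state driven near zero
```

The point of the structure is visible in `controller`: once the dynamics are expressed in A(x), B(x) form, the control law follows algebraically rather than requiring a separately learned controller.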

When they tested this approach, their controller closely followed desired trajectories, outpacing all the baseline methods. The controller extracted from their learned model nearly matched the performance of a ground-truth controller, which is built using the exact dynamics of the system.

“By making simpler assumptions, we got something that actually worked better than other complicated baseline approaches,” Richards adds.

The researchers also found that their method was data-efficient, achieving high performance even with limited data. For instance, it could effectively model a highly dynamic rotor-driven vehicle using only 100 data points. Methods that used multiple learned components saw their performance degrade much faster as datasets shrank.

This efficiency could make their technique especially useful in situations where a drone or robot needs to learn quickly in rapidly changing conditions.

Plus, their approach is general and could be applied to many types of dynamical systems, from robotic arms to free-flying spacecraft operating in low-gravity environments.

In the future, the researchers are interested in developing models that are more physically interpretable, and that would be able to identify very specific information about a dynamical system, Richards says. This could lead to better-performing controllers.

“Despite its ubiquity and importance, nonlinear feedback control remains an art, making it especially suitable for data-driven and learning-based methods. This paper makes a significant contribution to this area by proposing a method that jointly learns system dynamics, a controller, and control-oriented structure,” says Nikolai Matni, an assistant professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, who was not involved with this work. “What I found particularly exciting and compelling was the integration of these components into a joint learning algorithm, such that control-oriented structure acts as an inductive bias in the learning process. The result is a data-efficient learning process that outputs dynamic models that enjoy intrinsic structure that enables effective, stable, and robust control. While the technical contributions of the paper are excellent themselves, it is this conceptual contribution that I view as most exciting and significant.”

This research is supported, in part, by the NASA University Leadership Initiative and the Natural Sciences and Engineering Research Council of Canada.

Source: A simpler method for learning to control a robot

3 Questions: What’s it like winning the MIT $100K Entrepreneurship Competition?


Solar power plays a major role in nearly every roadmap for global decarbonization. But solar panels are large, heavy, and expensive, which limits their deployment. What if solar panels looked more like a yoga mat?

Such a technology could be transported in a roll, carried to the top of a building, and rolled out across the roof in a matter of minutes, slashing installation costs and dramatically expanding the places where rooftop solar makes sense.

That was the vision laid out by the MIT spinout Active Surfaces as part of the winning pitch at this year’s MIT $100K Entrepreneurship Competition, which took place May 15. The company is leveraging materials science and manufacturing innovations from labs across MIT to make ultra-thin, lightweight, and durable solar a reality.

The $100K is one of MIT’s most visible entrepreneurship competitions, and past winners say the prize money is only part of the benefit that winning brings to a burgeoning new company. MIT News sat down with Active Surfaces founders Shiv Bhakta, a graduate student in MIT’s Leaders for Global Operations dual-degree program within the MIT Sloan School of Management and Department of Civil and Environmental Engineering, and Richard Swartwout SM ’18 PhD ’21, an electrical engineering and computer science graduate and former Research Laboratory of Electronics postdoc and MIT.nano innovation fellow, to learn what the last couple of months have been like since they won.

Q: What is Active Surfaces’ solution, and what is its potential?

Bhakta: We’re commercializing an ultrathin film, flexible solar technology. Solar is one of the most broadly distributed resources in the world, but access is limited today. It’s heavy — it weighs 50 to 60 pounds a panel — it requires large teams to move around, and the form factor can only be deployed in specific environments.

Our approach is to develop a solar technology for the built environment. In a nutshell, we can create flexible solar panels that are as thin as paper, just as efficient as traditional panels, and at unprecedented cost floors, all while being applied to any surface. Same area, same power. That’s our motto.

When I came to MIT, my north star was to dive deeper in my climate journey and help make the world a better, greener place. Now, as we build Active Surfaces, I'm excited to see that dream taking shape. The prospect of transforming any surface into an energy source, thereby expanding solar accessibility globally, holds the promise of significantly reducing CO2 emissions at a gigaton scale. That’s what gets me out of bed in the morning.

Swartwout: Solar and a lot of other renewables tend to be pretty land-inefficient. Solar 1.0 is using low hanging fruit: cheap land next to easy interconnects and new buildings designed to handle the weight of current panels. But as we ramp up solar, those things will run out. We need to utilize spaces and assets better. That’s what I think solar 2.0 will be: urban PV deployments, solar that’s closer to demand, and integrated into the built environment. These next-generation use cases aren’t just a racking system in the middle of nowhere.

We’re going after commercial roofs, which would cover most [building] energy demand. Something like 80-90 percent of building electricity demands in the space can be met by rooftop solar.

The goal is to do the manufacturing in-house. We use roll-to-roll manufacturing, so we can buy tons of equipment off the shelf, but most roll-to-roll manufacturing is made for things like labeling and tape, and not a semiconductor, so our plan is to be the core of semiconductor roll-to-roll manufacturing. There’s never been roll-to-roll semiconductor manufacturing before.

Q: What have the last few months been like since you won the $100K competition?

Bhakta: After winning the $100K, we’ve gotten a lot of inbound contact from MIT alumni. I think that’s my favorite part about the MIT community — people stay connected. They’ve been congratulating us, asking to chat, looking to partner, deploy, and invest.

We’ve also gotten contacted by previous $100K competition winners and other startups that have spun out of MIT that are a year or two or three ahead of us in terms of development. There are a lot of startup scaling challenges that other startup founders are best equipped to answer, and it’s been huge to get guidance from them.

We’ve also gotten into top accelerators like Cleantech Open, Venture For Climatetech, and ACCEL at Greentown Labs. We also onboarded two rockstar MIT Sloan interns for the summer. Now we’re getting to the product-development phase, building relationships with potential pilot partners, and scaling up the area of our technology.      

Swartwout: Winning the $100K competition was a great point of validation for the company, because the judges themselves are well known in the venture capital community as well as people who have been in the startup ecosystem for a long time, so that has really propelled us forward. Ideally, we’ll be getting more MIT alumni to join us to fulfill this mission.

Q: What are your plans for the next year or so?

Swartwout: We’re planning on leveraging open-access facilities like those at MIT.nano and the University of Massachusetts Amherst. We’re pretty focused now on scaling size. Out of the lab, [the technology] is a 4-inch by 4-inch solar module, and the goal is to get up to something that’s relevant for the industry to offset electricity for building owners and generate electricity for the grid at a reasonable cost.

Bhakta: In the next year, through those open-access facilities, the goal is to go from 100-millimeter width to 300-millimeter width and a very long length using a roll-to-roll manufacturing process. That means getting through the engineering challenges of scaling technology and fine tuning the performance.

When we’re ready to deliver a pilotable product, it’s my job to have customers lined up ready to demonstrate this works on their buildings, sign longer term contracts to get early revenue, and have the support we need to demonstrate this at scale. That’s the goal.

Source: 3 Questions: What’s it like winning the MIT $100K Entrepreneurship Competition?

Standing Shoulder-to-Shoulder


Parents across the country, with different lived experiences, are united by our belief that we can be the catalysts to create transformative change that benefits all children in our public education system. That change can only happen when we commit to truly embracing the power of parent participation, collaboration, and shared responsibility in creating a

Continue Reading


Source: Standing Shoulder-to-Shoulder
