Tuesday 30 May 2017

Sorghum: Health food sweetener and now, clothing dye


Date:
May 24, 2017
Source:
American Chemical Society
Summary:
Sorghum has long been a staple food in many parts of the world, but in the U.S., it's best known as a sweetener and livestock feed. As demand for the grain soars, so does the amount of waste husks. To reduce this waste, scientists report a new use for it: a wool dye that can add ultraviolet protection and fluorescence properties to clothing.
FULL STORY

Brown clothing dyes made from sorghum husks (top row) closely match the colors of synthetic dyes (bottom row).
Credit: American Chemical Society
 
Sorghum has long been a staple food in many parts of the world, but in the U.S., it's best known as a sweetener and livestock feed. As demand for the grain soars, so does the amount of waste husks. To reduce this waste, scientists report in the journal ACS Sustainable Chemistry & Engineering a new use for it: a wool dye that can add ultraviolet protection and fluorescence properties to clothing.
Sorghum, which looks like pearl couscous, is a hardy, drought-tolerant crop that is gaining popularity as a health food, livestock feed and source of bioethanol. Additionally, scientists are working on transforming the crop's waste for a range of applications, including food coloring and wastewater purification. Building further on the colorant possibilities, Yiqi Yang, Xiuliang Hou and colleagues wanted to see if they could develop a practical clothing dye out of sorghum husks.

The researchers tested extracts of husks on wool materials, which turned varying shades of brown. The dyes showed good colorfastness even when the wool was washed, rubbed and ironed. They also added UV protection and fluorescence properties to the materials, which withstood 30 cycles of laundering.

Story Source:
Materials provided by American Chemical Society. Note: Content may be edited for style and length.

Helping plants pump iron

Date:
May 24, 2017
Source:
Salk Institute
Summary:
Genetic variants have been identified that help plants grow in low-iron environments, which could improve crop yields, say researchers.
FULL STORY

Seedlings (bottom) and roots (top) of Arabidopsis thaliana plants reveal that one variant of FRO2 gene (right) is better for growth in low-iron conditions than the other FRO2 variant (left).
Credit: Salk Institute
 
Just like people, plants need iron to grow and stay healthy. But some plants are better at getting this essential nutrient from the soil than others. Now, a study led by a researcher at the Salk Institute has found that variants of a single gene can largely determine a plant's ability to thrive in environments where iron is scarce.
The work, which appears in Nature Communications on May 24, 2017, could lead to improved crop yields for farmers and richer dietary sources of iron for animals and humans.

"Almost all life on Earth is based on plants -- animals eat plants and we eat animals or plants," says Wolfgang Busch, an associate professor in Salk's Plant Molecular and Cellular Biology Laboratory and senior author of the new paper. "It's very important for us to understand how plants solve the problem of getting iron because even though it's generally abundant on Earth, the form that plants can use is actually scarce."
The current work, led by Busch and including researchers from Austria's Gregor Mendel Institute of Molecular Plant Biology (where Busch was formerly based) focused on the well-studied weed Arabidopsis thaliana, a relative of cabbage and mustard. They obtained Arabidopsis seeds from strains that naturally occur all over Sweden, which has a variety of soils including some that are very low in iron. The team was particularly interested in strains that have adapted to low-iron soils and can grow a long root (a marker of health) even in those poor conditions.

The researchers grew the seeds in low-iron conditions, measuring their root growth along the way. They then employed a cutting-edge method called a Genome Wide Association Study (GWAS), which associates genes with a trait of interest -- in this case root length. A gene called FRO2 stood out as having a strong connection to root length. Different versions of the FRO2 gene ("variants") fell into two groups, those that were associated with a short root and those that were associated with a long root.
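The core GWAS idea can be illustrated with a toy sketch: score each variant by how strongly it separates the trait between allele groups. The data and the bare-bones association measure below are invented for illustration only; a real GWAS like this one involves hundreds of strains, many thousands of variants and rigorous statistics (mixed models, multiple-testing correction).

```python
# Toy sketch of a genome-wide association scan: for each variant,
# compare the trait (root length) between strains that carry allele 1
# and strains that carry allele 0. All numbers are invented.

# Each strain: alleles (0/1) at two hypothetical variants, plus the
# measured root length in millimeters.
strains = [
    {"FRO2": 1, "geneX": 0, "root_mm": 42.0},
    {"FRO2": 1, "geneX": 1, "root_mm": 39.5},
    {"FRO2": 0, "geneX": 1, "root_mm": 21.0},
    {"FRO2": 0, "geneX": 0, "root_mm": 23.5},
    {"FRO2": 1, "geneX": 0, "root_mm": 40.5},
    {"FRO2": 0, "geneX": 1, "root_mm": 22.0},
]

def association(variant):
    """Mean trait difference between carriers (allele 1) and non-carriers."""
    carriers = [s["root_mm"] for s in strains if s[variant] == 1]
    others = [s["root_mm"] for s in strains if s[variant] == 0]
    return sum(carriers) / len(carriers) - sum(others) / len(others)

for variant in ("FRO2", "geneX"):
    # In this invented data, FRO2 shows the much larger effect on
    # root length, so it would "stand out" in the scan.
    print(variant, round(association(variant), 2))
```

In the actual study, FRO2 stood out in exactly this sense: its variants split the strains into long-root and short-root groups.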
To find out whether variants of FRO2 were actually causing the difference (rather than merely being associated with it), the team grew seeds whose FRO2 gene had been deactivated. All plants in which the FRO2 gene had been deactivated now had stunted roots. The team then put either one variant or the other variant of the gene back in and again grew the plants in low-iron conditions. Variants for long roots grew better than variants for short roots. Together, the experiments showed that, indeed, genetic variants that confer higher activity of the FRO2 gene can largely be responsible for root growth and plant health in low-iron conditions. (Under normal conditions, FRO2 is not activated.)

"We thought by using a geographically restricted set of Arabidopsis thaliana strains, we could address local plant adaptations with respect to root growth under iron deficiency -- and we did," says Santosh Satbhai, a Salk research associate and first author of the paper. "We hope the agricultural community can benefit from this information."
The FRO2 gene is common to all plants, so boosting its expression in food crops or finding variants that thrive in poor soils could be important for increasing crop yields in the face of population growth and global warming's threats to arable land.
"At least two billion people worldwide currently suffer from iron malnutrition. Anything we can do to improve the iron content of plants will help a lot of people," adds Busch.

Story Source:
Materials provided by Salk Institute. Note: Content may be edited for style and length.

Secret weapon of smart bacteria tracked to 'sweet tooth'


May help cotton farmers battle blight

Date:
May 24, 2017
Source:
Texas A&M AgriLife Communications
Summary:
Researchers have figured out how a once-defeated bacterium has re-emerged to infect cotton in a battle that could sour much of the Texas and U.S. crop. And it boils down to this: a smart bacterium with a sweet tooth.
FULL STORY

Texas A&M AgriLife Research's Dr. Libo Shan, left, and doctoral student Kevin Cox have figured out how a once-defeated bacterium has re-emerged to infect cotton in a battle that could sour much of the Texas and U.S. crop. And it boils down to this: a smart bacterium with a sweet tooth.
Credit: Texas A&M AgriLife Research photo by Kathleen Phillips
 
Researchers have figured out how a once-defeated bacterium has re-emerged to infect cotton in a battle that could sour much of the Texas and U.S. crop.
And it boils down to this: a smart bacterium with a sweet tooth.
"It's a food fight between the bacterium and the cotton plant," said Dr. Libo Shan, Texas A&M AgriLife Research plant pathologist in College Station. "The bacterium tricks the host to produce food for itself. But once the bacterium is in the plant, it saves its own resources and switches the plant's transportation of sugar to itself. The host plant is deprived of sugar needed for energy, can't get rid of the bacteria and the disease progresses. This bacterium is very smart."

The discovery is in the May 24 edition of the journal Nature Communications.
The disease is bacterial blight caused by Xanthomonas citri subspecies malvacearum, otherwise known as Xcm. Decades ago, it wiped out thousands of cotton acres annually, showing up first as brown spots on leaves, stems and even bolls, then spreading until a plant -- indeed entire fields of plants -- dropped leaves and stopped growing. It's equally devastating on rice and cassava, Shan said.
Scientists long ago identified Xcm as the culprit but didn't know how or why the bacterium went on its warpath. Meanwhile, plant breeders tested types of cotton that were less susceptible and developed varieties that were more resistant to the disease. That worked from the late 20th century until about six years ago, when Xcm bacterial blight again showed up in force on the cotton crop in Texas and other states, devastating even varieties that previously could ward it off.
AgriLife Research scientists in Lubbock began searching the fields and plants for answers to help farmers cope, while another team in College Station took a deep look at the bacterium and cotton interaction at the cellular level and discovered its covert operations.
"This bacteria causes disease in cotton by using a secret weapon, a sort of needle-like structure, to inject the protein effectors into cotton cells," Shan said. "One of these effectors mimics the host transcription factors, directly targeting and activating the host gene transcription of a plant sugar transporter. The plant then begins to pump energizing sugar from within the cell to the apoplast or pathway of the cell, thus feeding the bacteria."

The discovery was made in part by Kevin Cox, a Texas A&M doctoral student from St. Louis, Missouri, who has been working with Shan for nearly four years and is lead author on the paper.
"My part of it was basically to identify what gene was being activated by a particular effector from the bacteria in order to cause disease," he said. "When we found out what the target of that effector was, that's when we got excited. That was pretty cool."
Shan said the excitement reverberated among researchers for several reasons.
"First it will provide a mechanistic understanding of how the bacterium causes disease in cotton, and second it provides a potential strategy for control of this cotton disease and resistance against the bacteria," she said. "Third, it may provide potential tools for earlier diagnostics of this disease before symptoms show in the field."
That's important, Shan said, because once a farmer sees the disease in the field, it's too late.
"It is very hard to control. Up to 40 percent of cotton yield can be affected," she added. "So though this is a fundamental discovery of this mechanism how the bacterium causes disease, it provides a lot of potential for field application."
Shan said isolates of the early strain and current strain of the Xcm bacterium -- and the sequencing of their genomes -- indicated the strains carry polymorphisms: they are genetically slightly different but act the same in their attack on cotton.

"We compared the genetic differences in trying to pin down a reason for the disease's rampant reemergence in recent years," she said. "It is likely that rapid evolvement of new bacterial effectors contributed to the resurgence of this disease."
The team will continue to explore bacterial blight in cotton, as well as study the implications of these finds on rice and cassava.
"We want to implement the tools for cotton diagnostics by working with engineers to develop a very sensitive probe, perhaps like a doctor would use for diagnosing diabetes in humans," Shan said. "It would be useful to have a probe able to detect sugar content in the field for cotton, particularly in the early seedling stage."
She said the team may also explore "gene editing technologies" to take out the portion of the gene that allows the bacteria to take the plant's food, which would make the cotton more resistant.

Story Source:
Materials provided by Texas A&M AgriLife Communications. Original written by Kathleen Phillips. Note: Content may be edited for style and length.

Improving wheat yields by increasing grain size, weight


Date:
May 26, 2017
Source:
South Dakota State University
Summary:
Researchers aim to improve wheat yields by increasing grain size and weight using a precise gene-editing tool known as CRISPR/Cas9.
FULL STORY

South Dakota State University associate biology and microbiology professor Wanlong Li assesses the growth of two-week-old wheat seedlings. Through a new three-year, $930,000 U.S. Department of Agriculture grant, Li hopes to improve wheat yields by increasing the size and weight of the kernels. The project is part of the National Institute of Food and Agriculture’s International Wheat Yield Partnership Program.
Credit: Emily Weber
 
Larger, heavier wheat kernels -- that's how associate professor Wanlong Li of the SDSU Department of Biology and Microbiology seeks to increase wheat production. Through a three-year, $930,000 U.S. Department of Agriculture grant, Li is collaborating with Bing Yang, an associate professor in genetics, development and cell biology at Iowa State, to increase wheat grain size and weight using a precise gene-editing tool known as CRISPR/Cas9.
South Dakota State is one of seven universities nationwide to receive funding to develop new wheat varieties as part of the National Institute of Food and Agriculture's International Wheat Yield Partnership (IWYP) Program. The program supports the G20's Wheat Initiative, which seeks to enhance the genetics related to yield and develop varieties adapted to different regions and environmental conditions.
The goal of IWYP, which was formed in 2014, is to increase wheat yields by 50 percent in 20 years. Currently, the yearly yield gain is less than 1 percent, but to meet the IWYP goal, wheat yields must increase 1.7 percent per year. "It's a quantum leap," Li said. "We need a lot of work to reach this."
Humans consume more than 500 million tons of wheat per year, according to Li. However, United States wheat production is decreasing, because farmers can make more money growing other crops. He hopes that increasing the yield potential will make wheat more profitable.

First, the researchers will identify genes that control grain size and weight in bread wheat using the rice genome as a model.
The CRISPR editing tool allows the researchers to knock out each negatively regulating gene and thus study its function, according to Li. "CRISPR is both fast and precise," he added. "It can produce very accurate mutations."
This technique will be used to create 30 constructs that target 20 genes that negatively impact wheat grain size and weight. From these, the University of California Davis Plant Transformation Facility, through a service contract, will produce 150 first-generation transgenic plants and the SDSU researchers will then identify which ones yield larger seeds. One graduate student and a research assistant will work on the project.
"The end products are not genetically modified organisms," Li emphasized. "When we transfer one of the CRISPR genes to wheat, it's transgenic. That then produces a mutation in a different genomic region. When the plants are then self-pollinated or backcrossed, the transgene and the mutation are separated."
The researchers then screen the plants to select those that carry the desired mutations. "This is null transgenic," Li said, noting USDA has approved this process in other organisms. Yang used this technique to develop bacterial blight-resistant rice.
As part of the project, the researchers will also transfer the mutations into durum wheat. Ultimately, these yield-increasing mutations, along with the markers to identify the traits, can be transferred to spring and winter wheat.

Story Source:
Materials provided by South Dakota State University. Note: Content may be edited for style and length.

Tree-climbing goats disperse seeds by spitting


Date:
May 24, 2017
Source:
Ecological Society of America
Summary:
Ecologists have observed an unusual way in which treetop-grazing goats may be benefiting the trees: the goats spit out the trees' seeds.
FULL STORY

Goats graze on an argan tree. In the fruiting season, many clean argan nuts are spat out by the goats while chewing their cud.
Credit: © simonestorelli / Fotolia
 
In dry southern Morocco, domesticated goats climb to the precarious tippy tops of native argan trees to find fresh forage. Local herders occasionally prune the bushy, thorny trees for easier climbing and even help goat kids learn to climb. During the bare autumn season, goats spend three quarters of their foraging time "treetop grazing."
Spanish ecologists have observed an unusual way in which the goats may be benefiting the trees: the goats spit the trees' seeds. Miguel Delibes, Irene Castañeda, and José M Fedriani reported their discovery in the latest Natural History Note in the May issue of the Ecological Society of America's journal Frontiers in Ecology and the Environment. The paper is open access.
Argan may be familiar from popular beauty products that feature argan oil, made from the tree's nuts. The nut is surrounded by a pulpy fruit that looks a bit like a giant green olive. For goats, the fruits are a tasty treat worth climbing up to 30 feet into the branches to obtain.

But the goats don't like the large seeds. Like cows, sheep, and deer, goats re-chew their food after fermenting it for a while in a specialized stomach. While ruminating over their cud, the goats spit out the argan nuts, delivering clean seeds to new ground, wherever the goat has wandered. Gaining some distance from the parent tree gives the seedling a better chance of survival.
This novel seed dispersal effect is a variation on the mechanism ecologists call endozoochory, in which seeds more commonly pass all the way through the animal's digestive system and out the other end (or sometimes through two digestive systems). The authors suspected that reports of goats dispersing argan seeds by this more common mechanism were mistaken, because goats do not usually poop large seeds.
The researchers have witnessed sheep, captive red deer, and fallow deer spitting seeds while chewing their cud, and suspect this spitting variation on endozoochory may actually be common -- and perhaps an essential route of seed spread for some plant species.

Story Source:
Materials provided by Ecological Society of America. Note: Content may be edited for style and length.

American beekeepers lost 33 percent of bees in 2016-17


Annual losses improved over last year; winter losses lowest in survey history

Date:
May 25, 2017
Source:
University of Maryland
Summary:
Beekeepers across the United States lost 33 percent of their honey bee colonies during the year spanning April 2016 to April 2017, according to the latest preliminary results of an annual nationwide survey. Rates of both winter loss and summer loss -- and consequently, total annual losses -- improved compared with last year. Winter losses were the lowest recorded since the survey began in 2006-07.
FULL STORY

This summary chart shows the results of an 11-year annual survey that tracks honey bee colony losses in the United States, spanning 2006-2017.
Credit: University of Maryland/Bee Informed Partnership
 
Beekeepers across the United States lost 33 percent of their honey bee colonies during the year spanning April 2016 to April 2017, according to the latest preliminary results of an annual nationwide survey. Rates of both winter loss and summer loss -- and consequently, total annual losses -- improved compared with last year.
Total annual losses were the lowest since 2011-12, when the survey recorded less than 29 percent of colonies lost throughout the year. Winter losses were the lowest recorded since the survey began in 2006-07.

The survey, which asks both commercial and small-scale beekeepers to track the survival rates of their honey bee colonies, is conducted each year by the nonprofit Bee Informed Partnership in collaboration with the Apiary Inspectors of America. Survey results for this year and all previous years are publicly available on the Bee Informed website: https://beeinformed.org/results-categories/winter-loss/
"While it is encouraging that losses are lower than in the past, I would stop short of calling this 'good' news," said Dennis vanEngelsdorp, an assistant professor of entomology at the University of Maryland and project director for the Bee Informed Partnership. "Colony loss of more than 30 percent over the entire year is high. It's hard to imagine any other agricultural sector being able to stay in business with such consistently high losses."

Beekeepers who responded to the survey lost a total of 33.2 percent of their colonies over the course of the year. This marks a decrease of 7.3 percentage points over the previous study year (2015-16), when loss rates were found to be 40.5 percent. Winter loss rates decreased from 26.9 percent in the previous winter to 21.1 percent this past winter, while summer loss rates decreased from 23.6 percent to 18.1 percent.
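The figures above can be sanity-checked with a few lines of arithmetic. Note that the seasonal losses do not simply add up to the annual figure, because the survey's colony counts change over the year; the naive sequential combination below is an illustration, not the survey's methodology.

```python
# Quick arithmetic check of the survey figures quoted above.
annual_2016, annual_2015 = 33.2, 40.5   # total annual loss, percent
winter, summer = 21.1, 18.1             # 2016-17 seasonal losses, percent

# Decrease in annual loss, in percentage points.
decrease = round(annual_2015 - annual_2016, 1)
print(decrease)  # 7.3, matching the reported drop

# If the winter and summer losses simply applied one after the other
# to the surviving colonies, the combined annual loss would be:
combined = 100 * (1 - (1 - winter / 100) * (1 - summer / 100))
print(round(combined, 1))  # close to, but not equal to, the reported
# 33.2 percent, since colony numbers fluctuate during the year
```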
The researchers noted that many factors are contributing to colony losses, with parasites and diseases at the top of the list. Poor nutrition and pesticide exposure are also taking a toll, especially among commercial beekeepers. These stressors are likely to synergize with each other to compound the problem, the researchers said.

"This is a complex problem," said Kelly Kulhanek, a graduate student in the UMD Department of Entomology who helped with the survey. "Lower losses are a great start, but it's important to remember that 33 percent is still much higher than beekeepers deem acceptable. There is still much work to do."
The number one culprit remains the varroa mite, a lethal parasite that can easily spread between colonies. Mite levels in colonies are of particular concern in late summer, when bees are rearing longer-lived winter bees.
In the fall months of 2016, mite levels across the country were noticeably lower in most beekeeping operations compared with past years, according to the researchers. This is likely due to increased vigilance on the part of beekeepers, a greater availability of mite control products and environmental conditions that favored the use of timely and effective mite control measures. For example, some mite control products contain essential oils that break down at high temperatures, but many parts of the country experienced relatively mild temperatures in the spring and early summer of 2016.

This is the 11th year of the winter loss survey, and the seventh year to include summer and annual losses. More than 4,900 beekeepers from all 50 states and the District of Columbia responded to this year's survey. All told, these beekeepers manage about 13 percent of the nation's estimated 2.78 million honey bee colonies.
The survey is part of a larger research effort to understand why honey bee colonies are in such poor health, and what can be done to manage the situation. Some crops, such as almonds, depend entirely on honey bees for pollination. Honey bees pollinate an estimated $15 billion worth of crops in the U.S. annually.
"Bees are good indicators of the health of the landscape as a whole," said Nathalie Steinhauer, a graduate student in the UMD Department of Entomology who leads the data collection efforts for the annual survey. "Honey bees are strongly affected by the quality of their environment, including flower diversity, contaminants and pests. To keep healthy bees, you need a good environment and you need your neighbors to keep healthy bees. Honey bee health is a community matter."

Story Source:
Materials provided by University of Maryland. Note: Content may be edited for style and length.

Monday 29 May 2017

Zika reached Miami at least four times, Caribbean travel likely responsible


Researchers analyze Zika genome to understand future pandemic prevention

Date:
May 24, 2017
Source:
Scripps Research Institute
Summary:
With mosquito season looming in the Northern Hemisphere, doctors and researchers are poised to take on a new round of Zika virus infections. Now a new study explains how Zika virus entered the United States via Florida in 2016 -- and how it might re-enter the country this year.
FULL STORY

TSRI Research Associate Nathan D. Grubaugh works with TSRI Graduate Student Karthik Gangavarapu to map the spread of Zika virus.
Credit: Photo by Faith Hark
 
With mosquito season looming in the Northern Hemisphere, doctors and researchers are poised to take on a new round of Zika virus infections.
Now a new study by a large group of international researchers led by scientists at The Scripps Research Institute (TSRI) explains how Zika virus entered the United States via Florida in 2016 -- and how it might re-enter the country this year.
By sequencing the virus's genome at different points in the outbreak, the researchers created a family tree showing where cases originated and how quickly they spread. They discovered that transmission of Zika virus began in Florida at least four -- and potentially up to forty -- times last year. The researchers also traced most of the Zika lineages back to strains of the virus in the Caribbean.
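The family-tree reconstruction rests on counting genetic differences between sequenced genomes: strains separated by fewer mutations are placed closer together on the tree. A toy sketch of that distance calculation, using short invented sequences rather than real Zika genomes (which are about 10,700 bases long and require far more sophisticated phylogenetic methods):

```python
# Toy illustration of genome comparison for outbreak tracing.
# Strains with fewer differences (a smaller Hamming distance) are
# inferred to be more closely related. Sequences are invented.

def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

genomes = {
    "Caribbean-1": "ACGTACGTAC",
    "Miami-A":     "ACGTACGTAT",   # one change from Caribbean-1
    "Miami-B":     "ACGAACGTAT",   # two changes from Caribbean-1
    "Brazil-1":    "TCGTACCTAC",   # two changes from Caribbean-1
}

for name, seq in genomes.items():
    if name != "Caribbean-1":
        print(name, hamming(genomes["Caribbean-1"], seq))
```

In this invented example, Miami-A sits closest to the Caribbean strain, which is the kind of signal that let the researchers trace most Florida lineages back to the Caribbean.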
"Without these genomes, we wouldn't be able to reconstruct the history of how the virus moved around," said TSRI infectious disease researcher and senior author of the study, Kristian G. Andersen, who also serves as director of infectious disease genomics at the Scripps Translational Science Institute (STSI). "Rapid viral genome sequencing during ongoing outbreaks is a new development that has only been made possible over the last couple of years."
The research was published May 24, 2017, in the journal Nature. This was one of three related studies, published simultaneously in Nature journals, exploring the transmission and evolution of Zika virus. A fourth study was also published in Nature Protocols providing details of the technologies used by the researchers.
Why Miami?
By sequencing Zika virus genomes from humans and mosquitoes -- and analyzing travel and mosquito abundance data -- the researchers found that several factors created what TSRI Research Associate Nathan D. Grubaugh called a "perfect storm" for the spread of Zika virus in Miami.
"This study shows why Miami is special," said Grubaugh, the lead author of the study.
First, Grubaugh explained, Miami is home to year-round populations of Aedes aegypti mosquitoes, the main species that transmits Zika virus. The area is also a significant travel hub, bringing in more international air and sea traffic than any other city in the continental United States in 2016. Finally, Miami is an especially popular destination for travelers who have visited Zika-afflicted areas.
The researchers found that travel from the Caribbean Islands may have significantly contributed to cases of Zika reaching the city. Of the 5.7 million international travelers entering Miami by flights and cruise ships between January and June of 2016, more than half arrived from the Caribbean.

Killing Mosquitoes Shows Results
The researchers believe Zika virus may have started transmission in Miami up to 40 times, but most travel-related cases did not lead to any secondary infections locally. The virus was more likely to reach a dead end than keep spreading.
The researchers found that one reason for the dead-ends was a direct connection between mosquito control efforts and disease prevention. "We show that if you decrease the mosquito population in an area, the number of Zika infections goes down proportionally," said Andersen. "This means we can significantly limit the risk of Zika virus by focusing on mosquito control. This is not too surprising, but it's important to show that there is an almost perfect correlation between the number of mosquitoes and the number of human infections."
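The "almost perfect correlation" Andersen describes is a Pearson correlation close to 1. A minimal sketch with invented counts (not the study's data) shows what that looks like numerically:

```python
# Pearson correlation between mosquito counts and local infections.
# The numbers are invented purely to illustrate a near-perfect
# positive correlation; they are not from the study.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

mosquitoes = [120, 95, 60, 30, 10]   # trapped per week (invented)
infections = [14, 11, 7, 3, 1]       # local cases (invented)

r = pearson(mosquitoes, infections)
print(round(r, 4))  # very close to 1: infections track mosquito numbers
```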
Based on data from the outbreak, Andersen sees potential in stopping the virus through mosquito control efforts in both Florida and other infected countries, instead of, for example, through travel restrictions. "Given how many times the introductions happened, trying to restrict traffic or movement of people obviously isn't a solution. Focusing on disease prevention and mosquito control in endemic areas is likely to be a much more successful strategy," he said.
When the virus did spread, the researchers found that splitting Miami into designated Zika zones -- often done by neighborhood or city block -- didn't accurately represent how the virus was moving. Within each Zika zone, the researchers discovered a mixing of multiple Zika lineages, suggesting the virus wasn't well-confined, likely moving around with infected people.
Andersen and Grubaugh hope these lessons from the 2016 epidemic will help scientists and health officials respond even faster to prevent Zika's spread in 2017.

Behind the Data
Understanding Zika's timeline required a large international team of scientists and partnerships with several health agencies. In fact, the study was a collaboration of more than 60 researchers from nearly 20 institutions, including study co-leaders at the U.S. Army Medical Research Institute of Infectious Diseases, Florida Gulf Coast University, the University of Oxford, the Fred Hutchinson Cancer Research Center, the Florida Department of Health and the Broad Institute of MIT and Harvard.
The scientists also designed a new method of genomic sequencing just to study the virus. Because Zika virus is hard to collect in the blood of those infected, it was a challenge for the researchers to isolate enough of its genetic material for sequencing. To solve this problem, the team, together with Joshua Quick and Nick Loman at the University of Birmingham in the UK, developed two different protocols to break apart the genetic material they could find and reassemble it in a useful way for analysis.

With these new protocols, the researchers sequenced the virus from 28 of the reported 256 Zika cases in Florida, as well as seven mosquito pools, to model what happened in the larger patient group. As they worked, the scientists released their data immediately publicly to help other scientists. They hope to release more data -- and analysis -- in real time as cases mount in 2017.
The new study was published with three companion papers, also in Nature journals, that explore Zika's spread in other parts of the Americas.

Story Source:
Materials provided by Scripps Research Institute. Note: Content may be edited for style and length.

Lizards may be overwhelmed by fire ants and social stress combined


Date:
May 23, 2017
Source:
Penn State
Summary:
Lizards living in fire-ant-invaded areas are stressed. However, a team of biologists found that the lizards did not exhibit this stress as expected after extended fire ant exposure in socially stressful environments, leading to questions about stress overload.

FULL STORY

The researchers conducted two experiments to test fence lizards' stress response to non-lethal exposure to fire ants.
Credit: Tracy Langkilde and Travis Robbins
 
Lizards living in fire-ant-invaded areas are stressed. However, a team of biologists found that the lizards did not exhibit this stress as expected after extended fire ant exposure in socially stressful environments, leading to questions about stress overload. "After encounters with non-lethal stress levels (from fire-ant exposure), we asked: Okay, they (the lizards) live, but what happens then?" said Tracy Langkilde, professor of biology, Penn State. "Do they live and are fine? Do they live and remain stressed? We just don't know."

Langkilde and her colleagues wanted to know the short- and long-term physiological effects on fence lizards due to invading fire ants. Langkilde's prior research showed that lizards in fire-ant-invaded areas have elevated stress levels in their natural habitats, but did not pinpoint the cause of that stress.
"We can't look at lizards in the field," said Langkilde, "because the environments aren't controlled and so we can't assume that the stress response is just due to the fire ants. There are other environmental factors that could be causing differences in stress." The researchers conducted two experiments to test fence lizards' stress response to non-lethal exposure to fire ants. The first experiment looked at the impact on stress physiology immediately after a short fire-ant exposure and the second looked at the physiological effects after extended exposure in semi-natural conditions.

The researchers first exposed lizards to natural fire-ant mounds in short, direct encounters, to show that fire ants really do cause stress to the lizards. In invaded areas, fence lizards will die after sixty seconds of bites from a few dozen fire ants. The first set of experiments exposed lizards once to just a few fire ants for sixty seconds, ensuring a non-lethal dose of fire-ant venom. The researchers measured the lizards' stress levels after the encounter with fire ants by the amount of the stress hormone corticosterone (CORT) in blood samples. Although they found that fire ants did cause stress in these staged encounters, this did not prove that fire ants were the cause of elevated stress seen in nature.

"The direct experiment shows that fire ants are stressors," Langkilde said, "but it does not rule out other environmental factors that may be contributing to differences in stress in nature. So, we wanted to control the environments as much as possible to isolate fire ants as the causal factor of elevated stress levels."
A second experiment tested the lizards' stress responses after two weeks spent in large, semi-natural enclosures built in a fire-ant-invaded area. Two of four enclosures had natural levels of fire ants while the other two were made fire-ant free. The researchers made the four enclosures as identical as possible -- for example, in the number of trees, places to hide or perch, the quantity of natural prey, and the sex balance of lizards -- so that the only controlled difference was the presence or absence of fire ants in the enclosures. They expected to measure significantly higher CORT levels in the blood samples from lizards that had been living with fire ants for two weeks.

Instead, they found the exact opposite. The lizards from fire-ant enclosures showed significantly lower baseline CORT levels than those from fire-ant-free enclosures.
"This could mean that these lizards were not stressed out by fire ants -- which is not likely," Langkilde said. "But it could also mean that something has gone wrong with their stress-control system."
The researchers speculated that the combination of fire ant exposure and social stress -- moving to a new environment, interacting with new lizards, and being watched by researchers -- may have overwhelmed the lizards' ability to manage stress, like blowing a fuse in a power surge.
"If you have too much of the stress hormone," Langkilde says, "it actually shuts down production of any more stress hormone." Langkilde and her colleagues tested for this type of breakdown -- called allostatic overload -- by measuring the lizards' immune system response and chemically stimulating further CORT production after the two-week experiment. Past laboratory tests showed that allostatic overload should suppress stress hormone production and immune system response.
However, neither the exposed nor the non-exposed lizards showed traditional signs of allostatic overload, and this has left the researchers with no clear reason why lizards had lower CORT levels after long-term fire ant exposure.

The lizards may have adapted to their new environments given more time, Langkilde said, or the current understanding of allostatic overload could be flawed. Expectations about overload come from artificially induced conditions tested in a laboratory, and Langkilde argues that researchers need to test how lizards respond in more natural environments.
"Environments are becoming increasingly disturbed by human developments, highways and incursions by other species," said Langkilde. "We need to know the effect of the invader, but we also need to know the impact of other environmental stressors. Right now, we're not very prepared to predict the long-term impact of stressors."

Story Source:
Materials provided by Penn State. Note: Content may be edited for style and length.

Amazingly flexible: Learning to read in your 30s profoundly transforms the brain


Date:
May 24, 2017
Source:
Max Planck Institute for Psycholinguistics
Summary:
Reading is such a modern cultural invention that there is no specific area in the brain dedicated to it. Scientists have found that learning to read as an adult reconfigures evolutionarily ancient brain structures hitherto assigned to different skills. These findings were obtained in a large-scale study in India in which completely illiterate women learned how to read and write for six months.

FULL STORY

When we become literate, neuroplasticity reshapes a network that is deeply rooted in the brain. This reorganisation makes us increasingly efficient at visually navigating through letter strings.
Credit: Max Planck Institute for Human Cognitive and Brain Sciences
Reading is such a new ability in human evolutionary history that the existence of a 'reading area' could not be specified in our genes. A kind of recycling process has to take place in the brain while learning to read: Areas evolved for the recognition of complex objects, such as faces, become engaged in translating letters into language. Some regions of our visual system thereby turn into interfaces between the visual and language systems.

"Until now it was assumed that these changes are limited to the outer layer of the brain, the cortex, which is known to adapt quickly to new challenges," says project leader Falk Huettig from the Max Planck Institute for Psycholinguistics. The Max Planck researchers together with Indian scientists from the Centre of Bio-Medical Research (CBMR) Lucknow and the University of Hyderabad have now discovered what changes occur in the adult brain when completely illiterate people learn to read and write. In contrast to previous assumptions, the learning process leads to a reorganisation that extends to deep brain structures in the thalamus and the brainstem. The relatively young phenomenon of human writing therefore changes brain regions that are very old in evolutionary terms and already core parts of mice and other mammalian brains.
"We observed that the so-called colliculi superiores, a part of the brainstem, and the pulvinar, located in the thalamus, adapt the timing of their activity patterns to those of the visual cortex," says Michael Skeide, scientific researcher at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and first author of the study, which has just been published in the magazine Science Advances. "These deep structures in the thalamus and brainstem help our visual cortex to filter important information from the flood of visual input even before we consciously perceive it." Interestingly, it seems that the more the signal timings between the two brain regions are aligned, the better the reading capabilities. "We, therefore, believe that these brain systems increasingly fine-tune their communication as learners become more and more proficient in reading," the neuroscientist explains further. "This could explain why experienced readers navigate more efficiently through a text."

Large-scale study with illiterates in India
The interdisciplinary research team obtained these findings in India, a country with an illiteracy rate of about 39 percent. Poverty still limits access to education in some parts of India, especially for women. Therefore, nearly all participants in this study were women in their thirties. At the beginning of the training, the majority of them could not decipher a single written word of their mother tongue, Hindi. Hindi, one of the official languages of India, is based on Devanagari, a script whose complex characters describe whole syllables or words rather than single letters.
Participants reached a level comparable to a first-grader after only six months of reading training. "This growth of knowledge is remarkable," says project leader Huettig. "While it is quite difficult for us to learn a new language, it appears to be much easier for us to learn to read. The adult brain proves to be astonishingly flexible." In principle, this study could also have taken place in Europe. Yet illiteracy is regarded as such a taboo in the West that it would have been immensely difficult to find volunteers to take part. Even in India, where the ability to read and write is strongly connected to social class, the project was a tremendous challenge. The scientists recruited volunteers from the same social class in two villages in Northern India to make sure that social factors could not influence the findings. Brain scans were performed in the city of Lucknow, a three-hour taxi ride from the participants' homes.

A new view on dyslexia
The impressive learning achievements of the volunteers do not only provide hope for adult illiterates, they also shed new light on the possible cause of reading disorders such as dyslexia. One possible cause for the basic deficits observed in people with dyslexia has previously been attributed to dysfunctions of the thalamus. "Since we found out that only a few months of reading training can modify the thalamus fundamentally, we have to scrutinise this hypothesis," neuroscientist Skeide explains. It could also be that affected people show different brain activity in the thalamus just because their visual system is less well trained than that of experienced readers. This means that these abnormalities can only be considered an innate cause of dyslexia if they show up prior to schooling. "That's why only studies that assess children before they start to learn to read and follow them up for several years can bring clarity about the origins of reading disorders," Huettig adds.

Story Source:
Materials provided by Max Planck Institute for Psycholinguistics. Note: Content may be edited for style and length.

How hand amputation, reattachment, affects brain: First-of-its-kind study

Date:

May 24, 2017
Source:
University of Missouri-Columbia
Summary:
Researchers have found evidence of specific neurochemical changes associated with lower neuronal health in brain regions that once controlled an amputated hand. Further, they report that some of these changes in the brain may persist in individuals who receive hand transplants, despite their recovered hand function.

FULL STORY

What happens to the brain in the case of amputation of a hand? What happens with reattachment?
Credit: © Sergey Nivens / Fotolia
 
When a person loses a hand to amputation, nerves that control sensation and movement are severed, causing dramatic changes in areas of the brain that controlled these functions. As a result, areas of the brain devoted to the missing hand take on other functions. Now, researchers from the University of Missouri have found evidence of specific neurochemical changes associated with lower neuronal health in these brain regions. Further, they report that some of these changes in the brain may persist in individuals who receive hand transplants, despite their recovered hand function.

"When there is a sudden increase or decrease in stimulation that the brain receives, the function and structure of the brain begins to change," said Carmen M. Cirstea, M.D., Ph.D., research assistant professor of Physical Medicine and Rehabilitation and lead author of the study. "Using a noninvasive approach known as magnetic resonance spectroscopy (MRS) to examine areas of the brain previously involved with hand function, we observed the types of changes taking place at the neurochemical level after amputation, transplantation or reattachment."

Cirstea, with co-author Scott Frey, Ph.D., the Miller Family Chair in Cognitive Neuroscience in the Departments of Psychological Sciences and Neurology, used MRS to evaluate the neuronal health and function of nerve cells of current hand amputees, former amputees and healthy subjects.
The researchers instructed volunteers to flex their fingers to activate sensorimotor areas in both sides of the brain. The research team then analyzed N-acetylaspartate (NAA) levels, a chemical associated with neuronal health. The researchers found that NAA values for the reattachment and transplant patients were similar to levels of amputees and significantly lower than the healthy control group.

"Previous research has found substantial reorganizational changes in the brain following limb injuries that decrease sensory and motor stimulation following limb injuries," Frey said. "These findings show that after surgical repairs, the effects of nerve injuries on the mature brain may continue even as former amputees recover varying degrees of sensory and motor functions in replanted or transplanted hands."
Due to the small number of reattachment and transplant patients studied (5), the researchers said that the results should be interpreted with caution until more work is completed.

Story Source:
Materials provided by University of Missouri-Columbia. Original written by Maria Platz. Note: Content may be edited for style and length.

Tuesday 23 May 2017

Olive oil makes you feel full


Date:
March 14, 2013
Source:
Technische Universitaet Muenchen
Summary:
Reduced-fat food products are gaining in popularity. But whether these products are effective or not is a matter of dispute: While it is true that they contain fewer calories, people tend to overcompensate by eating more. Now a study has shown how oils and fats regulate the sensation of feeling full after eating, with olive oil leading the way. So what makes this oil so effective?
Share:
FULL STORY

Olive oil makes you feel full.
Credit: © Angel Simon / Fotolia
Reduced-fat food products are gaining in popularity. More and more people are choosing "light" products in an attempt to lose weight, or at least in the hope that they will not gain any pounds. But whether these products are effective or not is a matter of dispute: While it is true that they contain fewer calories, people tend to overcompensate by eating more if they do not feel full. Now a study has shown how "natural" oils and fats regulate the sensation of feeling full after eating, with olive oil leading the way. So what makes this oil so effective?
Work groups at Technische Universität München (TUM) under Prof. Peter Schieberle and at the University of Vienna under Prof. Veronika Somoza studied four different edible fats and oils: lard, butterfat, rapeseed oil and olive oil. Over a period of three months, the study participants ate 500 grams of low-fat yoghurt enriched with one of the four fats or oils every day -- as a supplement to their normal diet.
"Olive oil had the biggest satiety effect," reports Prof. Peter Schieberle, Head of the TUM Chair of Food Chemistry and Director of the German Research Center for Food Chemistry. "The olive oil group showed a higher concentration of the satiety hormone serotonin in their blood. Subjectively speaking, these participants also reported that they found the olive oil yoghurt very filling." During the study period, no member of this group recorded an increase in their body fat percentage or their weight.

Aroma is the key
"The findings surprised us," admits Schieberle, "because rapeseed oil and olive oil contain similar fatty acids." The researchers decided to turn their attention to a completely different type of substance -- the aroma compounds in olive oil. In the second part of the study, one group was given yoghurt with olive oil aroma extracts and a control group was given plain yoghurt.
The results were conclusive: The olive oil group's calorie intake remained the same, but the control group had been consuming an extra 176 kilocalories per day. Schieberle explains: "The aroma group adapted their eating habits -- but the control group participants were obviously not able to do likewise. We also found that in comparison to the other group, the control group had less of the satiety hormone serotonin in their blood."

Direct impact on blood sugar level
How long the feeling of satiety lasts after eating depends on a number of factors, but blood sugar level is particularly significant. The faster it falls -- that is, the faster somatic cells absorb glucose from the blood -- the sooner the person will start to feel hungry again. In the next part of their study, the researchers investigated which of the aroma substances present in the oil are most effective at inhibiting glucose absorption.
The researchers used olive oils from Spain, Greece, Italy and Australia for their study. The research team managed to identify two substances that reduce the uptake of glucose from the blood by liver cells: hexanal and (E)-2-hexenal. They also discovered that Italian olive oil contained larger amounts of these two aroma compounds.
"Our findings show that aroma is capable of regulating satiety," concludes Schieberle. "We hope that this work will pave the way for the development of more effective reduced-fat food products that are nonetheless satiating."

Publication: P. Schieberle, V. Somoza, M. Rubach, L. Scholl, M. Balzer; Identifying substances that regulate satiety in oils and fats and improving low-fat foodstuffs by adding lipid compounds with a high satiety effect; Key findings of the DFG/AiF cluster project "Perception of fat content and regulating satiety: an approach to developing low-fat foodstuffs," 2009-2012.

Story Source:
Materials provided by Technische Universitaet Muenchen. Note: Content may be edited for style and length.

Oven-baked fish fingers have fewer furans than when fried


Date:
July 26, 2013
Source:
Plataforma SINC
Summary:
Researchers have discovered that fried fish fingers generate more furanic compounds than those baked in the oven. To be precise, there are three times as many when fried with olive oil and twice as many with sunflower oil. These compounds improve the food's organoleptic characteristics, but are believed to be toxic and carcinogenic.
Share:
FULL STORY

Oven-baked fish fingers.
Credit: SINC
Spanish and Portuguese researchers have discovered that fried fish fingers generate more furanic compounds than those baked in the oven. To be precise, there are three times as many when fried with olive oil and twice as many with sunflower oil. These compounds improve the food's organoleptic characteristics, but are believed to be toxic and carcinogenic.

Worries concerning the presence of furans in food have risen in recent years due to their toxic and carcinogenic effects, as observed in animals. In fact the International Agency for Research on Cancer, part of the WHO, has now listed them as possible carcinogens for humans.
Now, a team of researchers from the University of Porto (Portugal) and the University of Extremadura (Spain) has evaluated the effects that the cooking conditions of fish fingers can have on the quantity of furans (furan, 2-furfural, furfuryl alcohol, 2-pentylfuran and 5-hydroxymethylfurfural).
The results, published in the journal 'Food and Chemical Toxicology', reveal that fish fingers fried in olive oil contain approximately 30 micrograms of furans per gram (µg/g), and around 20 µg/g when sunflower oil is used.

When they are oven-baked, on the other hand, they generate fewer of these harmful substances: 10 µg/g. Furthermore, if fried fish fingers are reheated in the microwave, concentrations of 8.15 µg/g are found.
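A quick back-of-the-envelope comparison, using only the concentrations reported in the article, shows how large these relative reductions are (the dictionary keys below are our own labels):

```python
# Back-of-the-envelope comparison of the furan concentrations reported in
# the article (micrograms of furanic compounds per gram of fish finger).
furans_ug_per_g = {
    "fried, olive oil": 30.0,
    "fried, sunflower oil": 20.0,
    "oven-baked": 10.0,
    "fried, reheated in microwave": 8.15,
}

# Express each method as a percentage reduction relative to frying in olive oil.
baseline = furans_ug_per_g["fried, olive oil"]
for method, level in furans_ug_per_g.items():
    reduction = 100.0 * (1 - level / baseline)
    print(f"{method}: {level} ug/g ({reduction:.0f}% less than frying in olive oil)")
```

Oven baking works out to roughly a two-thirds reduction compared with frying in olive oil, consistent with the "three times as many" figure quoted above.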
"The number of furans is lower when the temperature is lower and frying time is shorter, and also decreases when a longer time elapses after cooking," explains María Trinidad Pérez-Palacios, one of the authors.
Adjust cooking times and conditions
"Therefore," she adds, "formation of furanic compounds can be reduced by adjusting the conditions of cooking and post-cooking, for example by using the oven instead of the deep fryer, lowering the frying time and temperature -- 4 minutes at 160 ºC is sufficient -- or leaving a suitable amount of time (10 minutes) between cooking the product and eating it."
The researchers found that by following these recommendations the formation of furans can be reduced, although the volatile compounds associated with the aroma and flavour of the cooked products decrease along with them.

"Furans enhance the organoleptic characteristics of food, but as there is scientific evidence of their potential toxicity and carcinogenicity, new research is channelled towards reducing the formation of these compounds without impairing our sensory enjoyment of what we are eating," remarks Pérez-Palacios.
There is currently no legislation on the maximum permitted levels of furans in food. The European Food Safety Authority (EFSA) is working on this issue and recommends analysing these substances in heated products, such as cooked foods or drinks such as coffee, both when purchasing and when cooked for consumption.

Pérez-Palacios believes that it is also important to educate consumers to read the labels on ready-to-cook food products, which often recommend oven baking as the method of preparation, "which is a positive thing in terms of the results we have found.
"As such, manufacturers should also make progress, for example by putting information on packaging relating to the possibility of oven-cooking the product or even recommending it as the sole method of preparation," the researcher concludes.

Story Source:
Materials provided by Plataforma SINC. Note: Content may be edited for style and length.

Olive oil more stable and healthful than seed oils for frying food


Date:
October 22, 2014
Source:
American Chemical Society
Summary:
Frying is one of the world's most popular ways to prepare food -- think fried chicken and french fries. Even candy bars and whole turkeys have joined the list. But before dunking your favorite food in a vat of just any old oil, consider using olive. Scientists report that olive oil withstands the heat of the fryer or pan better than several seed oils to yield more healthful food.
Share:
FULL STORY

Frying is one of the world's most popular ways to prepare food -- think fried chicken and french fries. Even candy bars and whole turkeys have joined the list. But before dunking your favorite food in a vat of just any old oil, consider using olive. Scientists report in ACS' Journal of Agricultural and Food Chemistry that olive oil withstands the heat of the fryer or pan better than several seed oils to yield more healthful food.
Mohamed Bouaziz and colleagues note that different oils have a range of physical, chemical and nutritional properties that can degrade oil quality when heated. Some of these changes can lead to the formation of new compounds that are potentially toxic. By-products of heating oil can also lower the nutritional value of the food being fried. Bouaziz's team wanted to find out which cooking oil can maintain its quality under high heat and repeated use.

The researchers deep- and pan-fried raw potato pieces in four different refined oils -- olive, corn, soybean and sunflower -- and reused the oil 10 times. They found that olive oil was the most stable oil for deep-frying at 320 and 374 degrees Fahrenheit, while sunflower oil degraded the fastest when pan-fried at 356 degrees. They conclude that for frying foods, olive oil maintains quality and nutrition better than seed oils.
The authors acknowledge funding from the Ministère de l'Enseignement Supérieur et de la Recherche Scientifique and the Ministère de l'Agriculture, Tunisia.

Story Source:
Materials provided by American Chemical Society. Note: Content may be edited for style and length.

Fluorescing food dyes as probes to improve food quality


Date:
February 11, 2015
Source:
Biophysical Society
Summary:
Food dyes can give cakes, candy and sodas brilliant colors of the rainbow. Now a team of food scientists has found that food coloring may be able to play more than its traditional esthetic role in food presentation.
Share:
FULL STORY

Food dyes like these give food bright colors, and could also potentially act as embedded sensors for food consistency in products such as yogurt.
Credit: Maria Corradini/Rutgers
 
Food dyes can give cakes, candy and sodas brilliant colors of the rainbow. Now a team of food scientists at Rutgers University in New Jersey has found that food coloring may be able to play more than its traditional esthetic role in food presentation.
The researchers are investigating whether common food dyes, some with colorful names like Sunset Yellow and Brilliant Blue, could act as optical probes of the quality of edible goods. Their initial results show that the fluorescence of five common food colors increases as the viscosity of the surrounding fluid increases -- meaning the dyes could potentially act as embedded sensors for a food's physical consistency in products such as yogurt or strawberry milk.

Testing the consistency of foods is important because consumers expect products to look and taste the same each time they buy them, and changes in physical consistency could also be indicators of bigger problems like spoilage, the researchers say. The researchers will present their results at the 59th annual meeting of the Biophysical Society, held Feb. 7-11 in Baltimore, Md.
Measuring light emission from fluorescent particles is a standard scientific technique to probe a material's properties without destroying it. But could the technique work on food? Many fluorescent dyes are toxic or expensive, making them unfit for human consumption and ruling them out for use as food quality probes.
The Rutgers researchers wondered if the edible colors already added to many food products could act as fluorescent probes.

"Fluorescent probes have been used in many applications, but the idea of using food colors for this purpose is new," said Sarah Waxman, an undergraduate student who is working on a research project to study the fluorescent properties of food dyes in the lab of Rutgers food scientist Richard Ludescher.
The research team tested the fluorescent properties of five edible food colors that are routinely added to food or pharmaceuticals: Allura Red, Sunset Yellow, Brilliant Blue, Fast Green and Tartrazine, a yellow-colored dye.
The team mixed the dyes in solutions of varying consistencies. Some solutions were made with pure water while others included components such as sugar or glycerol, a viscous liquid often used in pharmaceuticals. The researchers changed the thickness of the solutions by altering their temperature and composition, and then measured the fluorescent characteristics of the dyes under the varying conditions.
They found two main results that suggest food colors could potentially work as food quality probes. Firstly, they found that all five dyes fluoresce at a significantly different color than the light that is used to excite them or the fluorescence of other components in the environment, meaning the emitted signal could be easily distinguished from the background. Secondly, they found that although the food colors emitted practically no light when mixed in pure water, the light intensity increased as the solutions thickened.
The increased fluorescence could be due to the way molecules move differently in different liquids, the researchers explained. In pure water, dye molecules are free to twist, but when the motion of the molecules is constrained in a thicker liquid, energy can't escape through as many mechanical pathways -- meaning more energy is re-emitted in the form of fluorescent light. The change in the dyes' fluorescence could therefore give clues about the consistency and molecular arrangement of the fluid surrounding the dye particles, Waxman said.
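This viscosity dependence is the hallmark of so-called molecular rotors, whose emission is commonly described by the empirical Förster-Hoffmann power law, I = C * η^x. The sketch below illustrates that relationship only; the constants C and x are arbitrary illustration values, not fitted to any of the dyes in the study.

```python
# Illustrative molecular-rotor model: fluorescence intensity rising with
# viscosity per the empirical Forster-Hoffmann power law, I = C * eta**x.
# C and x here are arbitrary illustration values, not fitted to any dye.
def rotor_intensity(viscosity_cp, c=1.0, x=0.6):
    """Relative fluorescence intensity for a given viscosity (centipoise)."""
    return c * viscosity_cp ** x

# Approximate viscosities: water ~1 cP, a sugar solution ~50 cP,
# glycerol ~1400 cP at room temperature.
for label, eta in [("water", 1.0), ("sugar solution", 50.0), ("glycerol", 1400.0)]:
    print(f"{label:15s} eta={eta:7.1f} cP -> relative intensity {rotor_intensity(eta):8.1f}")
```

The monotonic rise from water to glycerol mirrors the researchers' observation that the dyes emit almost no light in pure water but grow brighter as the solution thickens.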

"A viscometer, which is a typical instrument to test the thickness of food, requires separating and ultimately discarding a large sample size and could report distorted numbers due to factors like the slippage of layers in the fluid," Waxman said. "Using food dyes, which are already present in many food products, as probes could be a less invasive and more accurate way to test food's physical properties."

Story Source:
Materials provided by Biophysical Society. Note: Content may be edited for style and length.

The glow of food dye can be used to monitor food quality


Date:
February 15, 2017
Source:
Biophysical Society
Summary:
Allura Red, a synthetic food and pharmaceutical color widely used within the U.S., boasts special properties that may make it and other food dyes appropriate as sensors or edible probes to monitor foods and pharmaceuticals. A team of researchers recently made this discovery during an extension of their work identifying and characterizing molecules in foods or food ingredients that might provide signals of food quality, stability or safety.
Share:
FULL STORY

Alexia Ciarfella, a junior working with Richard Ludescher in his lab, pipetting an Allura Red stock solution.
Credit: Jeff Heckman/Rutgers University
 
Allura Red, a synthetic food and pharmaceutical color widely used within the U.S., boasts special properties that may make it and other food dyes appropriate as sensors or edible probes to monitor foods and pharmaceuticals.
A team of researchers -- from Rutgers University, the University of Pennsylvania and the University of Massachusetts -- recently made this discovery during an extension of their work identifying and characterizing molecules in foods or food ingredients that might provide signals of food quality, stability or safety.

It turns out that many molecules found in foods absorb ultraviolet or visible light and subsequently emit light as fluorescence. Because fluorescence is sensitive to the local chemical and physical environment, this emitted light can "report" on the local properties of the food, such as pH, polarity or, in the case of Allura Red, local viscosity or thickness.
During the Biophysical Society's 61st Annual Meeting, being held Feb. 11-15, 2017, in New Orleans, Louisiana, Richard Ludescher, dean of Academic Programs and professor of food science in the School of Environmental and Biological Sciences at Rutgers, will present the group's work exploring the fluorescent properties of food dyes.

One food dye in particular, Sunset Yellow, "only exhibits phosphorescence in viscous solution, so we wanted to examine others that tend to be nonfluorescent to see if they might fluoresce in viscous solutions," Ludescher explained. All of the dyes they tested -- Tartrazine, Fast Green, Allura Red and others -- showed properties that are sensitive to changes in viscosity.
The researchers can "draw correlations between fluorescence intensity of, say, Allura Red, which shows that its intensity varies more than 10x upon changing viscosity from water to glycerol," Ludescher said.
The significance of the group's work is that it highlights the potential of harnessing molecules that are already inside the foods we eat to monitor their basic physical and chemical properties. "It could also be used during the manufacturing process to monitor and determine whether products have the right physical properties," Ludescher said.

With optical sensing, such analysis could be achieved within mere seconds during manufacture -- automatically and noninvasively replacing a measurement that previously might have required tens of minutes.
Interestingly, the team identified other naturally occurring molecules. "Many naturally occurring molecules are sensitive to other physical and chemical properties important for food quality, so a generalized technique using naturally occurring food molecules -- colors, flavors, vitamins, etc. -- to monitor food quality is, in principle, possible," Ludescher noted.

Edible optical probes, for example, would be intriguing for monitoring food quality. "It might be possible to monitor quality in products not only during manufacture but also during distribution, storage, or even during point of sale in the market," he pointed out. "Imagine employees at the local supermarket monitoring the product quality of foods on the shelf by simply scanning the actual product through its packaging with a handheld spectrometer."
What's the next step for this work? "Characterizing the optical properties of as many naturally occurring molecules as possible to build a library of potential intrinsic luminescent sensors and edible probes to monitor quality in foods and pharmaceuticals," said Ludescher.

Story Source:
Materials provided by Biophysical Society. Note: Content may be edited for style and length.

Blue and purple corn: Not just for tortilla chips anymore


Date:
May 17, 2017
Source:
University of Illinois College of Agricultural, Consumer and Environmental Sciences
Summary:
Consumers today insist on all-natural everything, and food dyes are no exception. Even if food manufacturers are willing to make the change, current sources of natural dyes are expensive and hard to come by. Now, a large University of Illinois project is filling the gap with colored corn.
Share:
FULL STORY

Consumers today insist on all-natural everything, and food dyes are no exception. Even if food manufacturers are willing to make the change, current sources of natural dyes are expensive and hard to come by. Now, a large University of Illinois project is filling the gap with colored corn.
"Most natural colors come from things like wine skins, red carrots, and beets. The problem with that is most of the product is wasted in extracting the coloring. It's not good value," says Jack Juvik, a geneticist in the crop sciences department at U of I.

Juvik and an interdisciplinary team have been experimenting with purple and blue corn varieties, noting that health-promoting pigments known as anthocyanins are located in the outer layers of the corn kernel. That makes a big difference, economically.
"You can process corn in different ways to remove only the outer layer. The rest can still be fed into the corn supply chain to make ethanol or grits or any of the other products corn is already used for. That outer layer becomes a value-added co-product," Juvik says.
The team has covered a lot of bases since the $1.4 million project began in 2014. For example, they identified the optimal milling process and demonstrated that corn-derived anthocyanins remain stable in food products. What's left is to find the most potent sources of the pigments for future corn breeding.

In a recent study, Juvik and his colleagues looked at anthocyanin type and concentration in nearly 400 genetically distinct lines of colored corn. They grew these lines in Illinois to see if anthocyanin concentration stayed constant from generation to generation -- a critical quality for breeding new varieties.
Peruvian types had some of the highest anthocyanin concentrations, and they held up throughout multiple generations. "That's good news. It means we can select for the trait we're interested in without worrying whether it will be expressed in new environments," Juvik says.
The next step will be getting those mighty Peruvian genes into high-yielding corn hybrids selected for production in the Midwest. If Juvik is successful, blue or purple corn could come to a field near you.

Story Source:
Materials provided by University of Illinois College of Agricultural, Consumer and Environmental Sciences. Note: Content may be edited for style and length.

Eating beans instead of beef would sharply reduce greenhouse gasses


Date:
May 23, 2017
Source:
Loma Linda University Adventist Health Sciences Center
Summary:
If Americans would eat beans instead of beef, the United States would immediately realize approximately 50 to 75 percent of its GHG reduction targets for the year 2020.
Share:
FULL STORY

A team of researchers from four American universities says the key to reducing harmful greenhouse gases (GHG) in the short term is more likely to be found on the dinner plate than at the gas pump.
The team, headed by Loma Linda University (LLU) researcher Helen Harwatt, PhD, suggests that one simple change in American eating habits would have a large impact on the environment: if Americans would eat beans instead of beef, the United States would immediately realize approximately 50 to 75 percent of its GHG reduction targets for the year 2020.
The researchers explained that beef is the most GHG-intensive food to produce and that producing legumes (beans, peas, etc.) results in one-fortieth the GHGs of producing beef.
"Given the novelty, we would expect that the study will be useful in demonstrating just how much of an impact changes in food production can make and increase the utility of such options in climate-change policy," Harwatt said.

In a 10-page paper released May 12, Harwatt and her colleagues noted that dietary alteration for climate change mitigation is currently a hot topic among policymakers, academics and members of society at large. The paper, titled "Substituting beans for beef as a contribution towards U.S. climate change targets," can be found online.
In addition to reducing GHG, Harwatt and her team -- which included Joan Sabate, MD, DrPH; Gidon Eshel, PhD; the late Sam Soret, PhD; and William Ripple, PhD -- concluded that shifting from animal-sourced to plant-sourced foods could help avert global temperature rise.
Sabate, who serves as executive director of the Center for Nutrition, Healthy Lifestyle and Disease Prevention at LLU School of Public Health, said the findings are substantial.
"The nation could achieve more than half of its GHG reduction goals without imposing any new standards on automobiles or manufacturing," Sabate said.

The study, which was conducted while Harwatt was an environmental nutrition research fellow at Loma Linda University, also found that beef production is an inefficient use of agricultural land. Substituting beans for beef would free up 42 percent of the roughly 1.65 million square kilometers (more than 400 million acres) of U.S. cropland currently under cultivation -- an area approximately 1.6 times the size of the state of California.
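As a rough sanity check on the land-area arithmetic, assuming total U.S. cropland of about 1.65 million square kilometers and California at roughly 424,000 square kilometers (commonly cited public figures, used here only as illustrative inputs):

```python
# Back-of-the-envelope check of the land-area figures in the article.
KM2_PER_ACRE = 0.00404686

total_cropland_km2 = 1.65e6
total_cropland_acres = total_cropland_km2 / KM2_PER_ACRE
print(f"Total cropland: {total_cropland_acres / 1e6:.0f} million acres")

freed_km2 = 0.42 * total_cropland_km2          # 42% freed by the swap
california_km2 = 423_970
print(f"Freed area: {freed_km2:.3g} km^2 "
      f"({freed_km2 / california_km2:.1f}x California)")
```

The conversion confirms that 1.65 million square kilometers is a bit over 400 million acres, and that 42 percent of it is about 1.6 times the area of California.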

Harwatt applauds the fact that more than a third of American consumers are currently purchasing meat analogs: plant-based products that resemble animal foods in taste and texture. She says the trend suggests that animal-sourced meat is no longer a necessity.
"Given the scale of greenhouse gas reductions needed to avoid the worst impacts of climate change, are we prepared to eat beef analogs that look and taste like beef, but have a much lower climate impact?" she asks. "It looks like we'll need to do this. The scale of the reductions in greenhouse gas emissions needed doesn't allow us the luxury of 'business as usual' eating patterns."

Story Source:
Materials provided by Loma Linda University Adventist Health Sciences Center. Note: Content may be edited for style and length.

Eating a diet rich in fruit and vegetables could cut obesity risk


Date:
May 18, 2017
Source:
European Association for the Study of Obesity
Summary:
Pro-vegetarian diets (with a higher consumption of plant-based foods compared to animal-based foods) could provide substantial protection against obesity, according to new research.
Share:
FULL STORY

Pro-vegetarian diets (with a higher consumption of plant-based foods compared to animal-based foods) could provide substantial protection against obesity, according to new research presented at this year's European Congress on Obesity (ECO) in Porto, Portugal (17-20 May).
This observational study found that people who ate a highly pro-vegetarian diet -- rich in foods from plant sources such as vegetables, fruit, and grains -- cut their risk of developing obesity almost in half compared to those who were least pro-vegetarian, with a dietary pattern rich in animal foods such as meat and animal fats.

Current evidence suggests that such a pro-vegetarian diet has a protective role in cardiovascular disease and diabetes, but little is known about its role on the risk of developing obesity in healthy populations.
The study was carried out by University of Navarra student Julen Sanz under the supervision of Dr. Alfredo Gea and Professor Maira Bes-Rastrollo from the University of Navarra, and CIBERobn (Carlos III Institute of Health), Spain. They examined the association between varying degrees of pro-vegetarian (plant-based) diet and the incidence of obesity (body mass index; BMI >30) in over 16,000 healthy, non-obese adults from the SUN Cohort (Seguimiento Universidad de Navarra) -- a study tracking the health of Spanish graduates since 1999.

Participants completed detailed food questionnaires at the start of the study, and researchers used a pro-vegetarian diet index (PVI) to score each participant on the types of food they ate. Points were given for eating seven plant food groups -- vegetables, fruits, grains, nuts, olive oil, legumes (such as peas, beans, and lentils) and potatoes. Points were subtracted for five animal groups -- animal fats, dairy, eggs, fish and other seafood, and meat. Based on their scores, participants were categorised into five groups from the 20% with the least pro-vegetarian diet (quintile 1) to the 20% with the most (quintile 5), and followed for an average of 10 years.
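The scoring scheme above can be sketched in a few lines. This is a simplified stand-in: plant-group intake counts toward the score and animal-group intake counts against it, using raw servings. The SUN cohort's actual index assigns points per food group based on consumption quintiles, and those exact rules are not given in the article.

```python
# Illustrative sketch of a pro-vegetarian diet index (PVI):
# points for plant food groups, points subtracted for animal food groups.
PLANT_GROUPS = ["vegetables", "fruits", "grains", "nuts",
                "olive_oil", "legumes", "potatoes"]
ANIMAL_GROUPS = ["animal_fats", "dairy", "eggs", "fish_seafood", "meat"]

def pvi_score(servings: dict) -> float:
    """Plant intake raises the score; animal intake lowers it."""
    plant = sum(servings.get(g, 0) for g in PLANT_GROUPS)
    animal = sum(servings.get(g, 0) for g in ANIMAL_GROUPS)
    return plant - animal

# Hypothetical participant: average daily servings per food group.
participant = {"vegetables": 3, "fruits": 2, "grains": 4, "legumes": 1,
               "meat": 2, "dairy": 2, "eggs": 1}
print(pvi_score(participant))  # 10 plant - 5 animal = 5
```

Ranking all participants by a score like this and splitting them into fifths is what produces the quintile-1 (least pro-vegetarian) to quintile-5 (most pro-vegetarian) comparison reported below.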

During follow-up, 584 participants became obese. The researchers found that participants who closely followed a pro-vegetarian diet were less likely to become obese. Modelling showed that compared to the least-vegetarian participants (quintile 1), the most vegetarian (quintile 5) had a 43% reduced risk of developing obesity. For quintiles 2, 3 and 4, the reduced risk of obesity was 6%, 15% and 17%, respectively, versus quintile 1. The results held true after accounting for other influential factors including sex, age, alcohol intake, BMI, family history of obesity, snacking between meals, smoking, sleep duration, and physical activity.

The authors acknowledge that their findings show observational differences rather than evidence of cause and effect. They conclude: "Our study suggests that plant-based diets are associated with substantially lower risk of developing obesity. This supports current recommendations to shift to diets rich in plant foods, with lower intake of animal foods."

Story Source:
Materials provided by European Association for the Study of Obesity. Note: Content may be edited for style and length.

Sunflower genome sequence to provide roadmap for more resilient crops


Date:
May 23, 2017
Source:
University of Georgia
Summary:
Researchers have completed the first sunflower genome sequence. This new resource will assist future research programs using genetic tools to improve crop resilience and oil production.
Share:
FULL STORY

John M. Burke is a professor of plant biology at the University of Georgia.
Credit: Paul Efland/UGA
 
University of Georgia researchers are part of an international team that has published the first sunflower genome sequence. This new resource will assist future research programs using genetic tools to improve crop resilience and oil production.
They published their findings today in the journal Nature.
Known for its beauty and also as an important source of food, the sunflower is a global oil crop that shows promise for climate change adaptation because it can maintain stable yields across a wide variety of environmental conditions, including drought. However, assembling the sunflower genome has until recently been difficult, because it mostly consists of highly similar, related sequences.
The research team in North America and Europe sequenced the genome of the domesticated sunflower Helianthus annuus L. They also performed comparative and genome-wide analyses, which provide insights into the evolutionary history of Asterids, a subgroup of flowering plants that includes potatoes, tomatoes and coffee.

They identified new candidate genes and reconstructed genetic networks that control flowering time and oil metabolism, two major sunflower breeding traits, and found that the flowering time networks have been shaped by the past duplication of the entire genome. Their findings suggest that ancient copies of genes can retain their functionality and still influence traits of interest after tens of millions of years.
"As the first reference sequence of the sunflower genome, it's quite the accomplishment," said paper co-author John M. Burke, professor of plant biology and member of the UGA Plant Center. "The sunflower genome is over 40 percent larger than the maize [corn] genome, and roughly 20 percent larger than the human genome, and its highly repetitive nature made it a unique challenge for assembly."
Burke, whose lab studies the genomic basis of evolutionary divergence within the sunflower family, was involved in the genetic mapping upon which the genome assembly was based and oversaw the whole genome re-sequencing of the 80 sunflower lines described in the paper.

The international collaboration was led by Nicolas Langlade at the French National Institute for Agricultural Research in Toulouse, France, and included Loren Rieseberg of the University of British Columbia.
"Like many plant genomes, the sunflower genome is highly repetitive, though in this case the situation is a bit worse," Burke said. "The repetitive elements within the genome arose relatively recently, meaning that they haven't had time to differentiate. It's therefore like putting together a massive puzzle wherein many pieces look exactly the same, or nearly so."
The authors concluded that this research reinforces the sunflower as a model for ecological and evolutionary studies and climate change adaptation, and will accelerate breeding programs.
"It will greatly facilitate our work to understand the molecular mechanisms underlying key traits related to abiotic stress resistance -- things like drought, salinity and low nutrient resistance," Burke said. "This genome sequence will essentially serve as a genetic road map to pinpoint the genes underlying these sorts of traits."

Story Source:
Materials provided by University of Georgia. Note: Content may be edited for style and length.

BENCON AGRO-ALLIED PRODUCTS AND SERVICES: SHARUZ CONCEPT NIG. LTD