Category Archives: Internet

My First Fact-Finding Mission and Home Purchase Was Easy

My husband and I have been talking about moving to Colorado, and one day he came home and said it was now going to become a reality if I wanted it to be a reality. He said that his boss thought he would be a perfect fit at their Denver branch if he was interested in it. Boy, were we ever interested! So, my husband sent me on a fact-finding mission by flying me out to meet up with a realtor in Denver so that she could show me what types of places were available. I was so excited, yet very nervous as well. I usually have my husband with me when I do important things like that, and I hoped that I could handle it all myself.

When I married my guy, he already had a house. I had been living in an apartment for 10 years prior to meeting him. So I had absolutely zero experience when it came to purchasing any type of property. But he gave me a few tips here and there before I left. And I also checked out a couple of websites to learn a bit, too.

Find ancient solutions to modern climate problems

Washington State University archaeologists are at the helm of new research using sophisticated computer technology to learn how past societies responded to climate change.

Their work, which links ancient climate and archaeological data, could help modern communities identify new crops and other adaptive strategies when threatened by drought, extreme weather and other environmental challenges.

In a new paper in the Proceedings of the National Academy of Sciences, Jade d’Alpoim Guedes, assistant professor of anthropology, and WSU colleagues Stefani Crabtree, Kyle Bocinsky and Tim Kohler examine how recent advances in computational modeling are reshaping the field of archaeology.

“For every environmental calamity you can think of, there was very likely some society in human history that had to deal with it,” said Kohler, emeritus professor of anthropology at WSU. “Computational modeling gives us an unprecedented ability to identify what worked for these people and what didn’t.”

Leaders in agent-based modeling

Kohler is a pioneer in the field of model-based archaeology. He developed sophisticated computer simulations, called agent-based models, of the interactions between ancestral peoples in the American Southwest and their environment.

He launched the Village Ecodynamics Project in 2001 to simulate how virtual Pueblo Indian families, living on computer-generated and geographically accurate landscapes, likely would have responded to changes in specific variables like precipitation, population size and resource depletion.

By comparing the results of agent-based models against real archaeological evidence, anthropologists can identify past conditions and circumstances that led different civilizations around the world into periods of growth and decline.

‘Video game’ plays out to logical conclusion

Agent-based modeling is also used to explore the impact humans can have on their environment during periods of climate change.

One study mentioned in the WSU review demonstrates how drought, hunting and habitat competition among growing populations in Egypt led to the extinction of many large-bodied mammals around 3,000 B.C. In addition, d’Alpoim Guedes and Bocinsky, an adjunct faculty member in anthropology, are investigating how settlement patterns in Tibet are affecting erosion.

“Agent-based modeling is like a video game in the sense that you program certain parameters and rules into your simulation and then let your virtual agents play things out to the logical conclusion,” said Crabtree, who completed her Ph.D. in anthropology at WSU earlier this year. “It enables us to not only predict the effectiveness of growing different crops and other adaptations but also how human societies can evolve and impact their environment.”
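To give a feel for how such a simulation is wired together, here is a minimal, heavily simplified sketch of an agent-based model in Python: virtual households farm a landscape whose harvest depends on random yearly precipitation, and the population grows or shrinks accordingly. All rules and numbers are illustrative placeholders, not parameters from the Village Ecodynamics Project.

```python
import random

# Minimal agent-based sketch: virtual households farm a shared landscape whose
# yield depends on annual precipitation. Rules and numbers are illustrative only.

def simulate(years=200, households=50, seed=1):
    rng = random.Random(seed)
    population = households
    history = []
    for year in range(years):
        rainfall = rng.gauss(1.0, 0.3)            # random yearly precipitation index
        yield_per_household = max(0.0, rainfall)  # harvest scales with rainfall
        # Households below a subsistence threshold leave or die out;
        # good years allow a modest number of new households to form.
        if yield_per_household < 0.7:
            population = int(population * 0.95)
        else:
            population = int(population * 1.02) + 1
        history.append((year, round(rainfall, 2), population))
    return history

if __name__ == "__main__":
    for year, rain, pop in simulate()[:10]:
        print(f"year {year:3d}  rainfall {rain:5.2f}  households {pop}")
```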

Modeling disease- and drought-tolerant crops

Species distribution or crop-niche modeling is another sophisticated technique that archaeologists use to predict where plants and other organisms grew well in the past and where they might be useful today.

Bocinsky and d’Alpoim Guedes are using the modeling technique to identify little-used or in some cases completely forgotten crops that could be useful in areas where warmer weather, drought and disease impact food supply.

One of the crops they identified is a strain of drought-tolerant corn the Hopi Indians of Arizona adapted over the centuries to prosper in poor soil.
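The logic of crop-niche modeling can be illustrated with a toy example: a site counts as suitable for a crop if its accumulated growing degree days and rainfall meet the crop's minimum requirements. The crops and thresholds below are invented placeholders, not values from the WSU studies.

```python
# Toy crop-niche check: a site is suitable if its growing degree days (GDD)
# and rainfall meet a crop's minimum requirements. Crops and thresholds are
# illustrative placeholders, not values from the WSU research.

CROP_REQUIREMENTS = {
    "hopi_maize":  {"min_gdd": 1800, "min_rain_mm": 250},
    "common_corn": {"min_gdd": 2700, "min_rain_mm": 500},
}

def growing_degree_days(daily_temps_c, base_c=10.0):
    """Sum of daily mean temperature above the base temperature."""
    return sum(max(0.0, t - base_c) for t in daily_temps_c)

def suitable_crops(daily_temps_c, annual_rain_mm):
    gdd = growing_degree_days(daily_temps_c)
    return [name for name, req in CROP_REQUIREMENTS.items()
            if gdd >= req["min_gdd"] and annual_rain_mm >= req["min_rain_mm"]]

# Example: a short, dry growing season still supports the drought-tolerant strain.
season = [20.0] * 120 + [24.0] * 60                # hypothetical daily mean temperatures
print(suitable_crops(season, annual_rain_mm=300))  # ['hopi_maize']
```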

Prognoses for a sustainable future

Computer models play a significant role in environmental policy, but they offer only a partial picture of the industrial system.

Whether it is electric automobiles, renewable energy, a carbon tax or sustainable consumption: sustainable development requires strategies that meet people’s needs without harming the environment. Before such strategies are implemented, their potential impact on the environment, the economy and society needs to be tested. These tests can be conducted with the help of computer models that depict future demographic and economic development and that examine the interplay between industry, the climate and other essential natural systems. Together with his Norwegian and US colleagues, junior professor Dr. Stefan Pauliuk of the Faculty of Environment and Natural Resources at the University of Freiburg undertook the most comprehensive review to date of five major so-called integrated assessment models. Published in the scientific journal Nature Climate Change, the team’s results show that these models exhibit substantial deficits in their representation of the industrial system, which may lead to flawed estimates of the potential environmental impacts and societal benefits of new technologies and climate policies.

Integrated assessment models create scenarios for the most cost-effective transition toward a sustainable supply of materials and energy while taking the planetary boundaries into consideration. “The scenarios generated by the models are an important instrument for environmental policy-making,” says Pauliuk. “They show that it is technically feasible to achieve a central goal in global climate policy: Namely, to limit average global warming to a maximum of two degrees Celsius compared to the level at the beginning of the Industrial Era.” As a consequence, the model results were important during the preparatory negotiations leading up to the Paris Agreement that came into effect in November 2016 with the intention of mitigating climate change. The models’ results also play a significant role in the latest assessment report issued by the Intergovernmental Panel on Climate Change (IPCC), where they are used to link the mitigation options described for different sectors such as buildings, transport, or energy supply.

“Because the models’ results are so important to decision makers, the question arises about their validity and robustness,” says Pauliuk. As a result, the researchers paid particular attention to the way in which the models represent the industrial system; that is, the global value chain of production, processing, and use of energy, materials, and consumer goods, as well as recycling. The industrial system is the source of all human-made goods. At the same time, it is also the origin of all emissions to the natural environment. But the representation of the industrial system in these models is incomplete, according to the researchers. “In particular, the cycle of materials, for instance of iron and copper, but also the representation of urban infrastructure is completely missing,” explains Pauliuk. This may limit the predictive capacity of the models, but more research is needed: “It remains to be shown how ignoring core parts of the industrial system influences the feasibility of certain scenarios to mitigate climate change. In addition, important strategies to reduce emissions such as recycling, material efficiency, or urban density have not been considered at all.” The researchers are now calling for the models to be expanded so that they describe the cycle of materials and other details of the industrial system more accurately. The ultimate goal is to obtain more realistic prognoses for climate and resource policies.

Invasion mimics a drunken walk

A theory that uses the mathematics of a drunken walk describes ecological invasions better than waves, according to Tim Reluga, associate professor of mathematics and biology, Penn State.

The ability to predict the movement of an ecological invasion is important because it determines how resources should be spent to stop an invasion in its tracks. The spread of a disease such as the black plague in Europe and the spread of an invasive species such as the gypsy moth from Asia are both examples of ecological invasions.

Two camps of scientists work on this problem — mathematicians and ecologists. Mathematicians focus on creating models to describe invasion waves, while ecologists go to the field to measure observations of invasions, building computer simulations to predict the phenomenon they observe. Ideally both camps should agree on the underlying theory to explain their model results. But an ongoing argument continues among these scientists due to one seemingly simple detail — how randomness affects an ecological invasion. Reluga hopes his approach will settle the argument, reconciling mathematical models with ecological observations.

“I hope this paper makes things clear that different kinds of randomness have different effects on invasions,” Reluga said.

Previously, ecologists had offered inconsistent theories about how randomness influences an invasion: some said it sped an invasion up, while others said it slowed one down. Mathematicians, in contrast, held that randomness had no effect on invasions. In reality, randomness affects an ecological invasion in a number of different ways.

Reluga’s work categorizes this randomness into three factors — spatial, demographic, and temporal. The invasion of a forest population, such as the spread of oak trees in England and Scotland at the end of the last ice age, shows how all three random factors affect an ecological invasion. The presence of squirrels in the forest can increase spatial randomness, as squirrels disperse acorns farther away from the trees. Demographic randomness describes the variation in the number of acorns the trees produce. Finally, temporal randomness refers to how regularly the trees disperse seeds through time.

For his research, Reluga constructed a mathematical model of an ecological invasion that behaves like a random walk, or movement that resembles the way someone who has had too much to drink tries to walk. He then showed that the model replicates four key properties observed in computer simulations: increasing spatial and temporal randomness sped up an invasion, while increasing demographic randomness and population density slowed one down. By mathematically proving that his model replicates these properties, he concluded that his treatment of spatial, demographic, and temporal random factors resembles the real world. Reluga’s results, published in Theoretical Population Biology, agree with what ecologists observe in the field and with what mathematicians predict with models, covering a wide class of invasion phenomena.
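The flavor of the approach can be conveyed with a toy simulation (not Reluga's actual model): treat the position of an invasion front as a random walk that advances each generation by a random dispersal distance, then average over many replicates to estimate the invasion speed.

```python
import random

# Toy illustration: the invasion front is a random walk, advancing each
# generation by a random dispersal distance. Averaging over replicates
# estimates the invasion speed and its variability.

def invasion_front(generations=100, mean_step=1.0, step_sd=0.5, seed=None):
    rng = random.Random(seed)
    position = 0.0
    trajectory = [position]
    for _ in range(generations):
        # The front jumps forward by a random amount; negative draws are
        # truncated to zero (the invasion does not retreat in this sketch).
        position += max(0.0, rng.gauss(mean_step, step_sd))
        trajectory.append(position)
    return trajectory

speeds = []
for rep in range(500):
    traj = invasion_front(seed=rep)
    speeds.append(traj[-1] / (len(traj) - 1))
print(f"mean invasion speed: {sum(speeds) / len(speeds):.3f} per generation")
```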

“This is the way we should be thinking about the problem of randomness in ecological invasions,” Reluga said. “If we think about it in this different frame, all the results make natural sense.”

Help design physical therapy regimens

After a stroke, patients typically have trouble walking and few are able to regain the gait they had before suffering a stroke. Researchers funded by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) have developed a computational walking model that could help guide patients to their best possible recovery after a stroke. Computational modeling uses computers to simulate and study the behavior of complex systems using mathematics, physics, and computer science. In this case, researchers are developing a computational modeling program that can construct a model of the patient from the patient’s walking data collected on a treadmill and then predict how the patient will walk after different planned rehabilitation treatments. They hope that one day the model will be able to predict the best gait a patient can achieve after completing rehabilitation, as well as recommend the best rehabilitation approach to help the patient achieve an optimal recovery.

Currently, there is no way for a clinician to determine the most effective rehabilitation treatment prescription for a patient. Clinicians cannot always know which treatment approach to use, or how the approach should be implemented to maximize walking recovery. B.J. Fregly, Ph.D., and his team (Andrew Meyer, Ph.D., Carolynn Patten, PT, Ph.D., and Anil Rao, Ph.D.) at the University of Florida developed a computational modeling approach to help answer these questions. They tested the approach on a patient who had suffered a stroke.

The team first measured how the patient walked at his preferred speed on a treadmill. Using those measurements, they then constructed a neuromusculoskeletal computer model of the patient that was personalized to the patient’s skeletal anatomy, foot contact pattern, muscle force generating ability, and neural control limitations. Fregly and his team found that the personalized model was able to predict accurately the patient’s gait at a faster walking speed, even though no measurements at that speed were used for constructing the model.
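As a vastly simplified stand-in for that calibrate-then-predict workflow, the sketch below fits a single subject-specific parameter (the walk ratio, step length divided by cadence, which stays roughly constant across speeds for a given person) from data at one speed and uses it to predict gait at another. It illustrates the idea only; the study itself used a far richer neuromusculoskeletal model.

```python
# Vastly simplified stand-in for calibrating a subject-specific gait model at
# one speed and predicting gait at another. The only "personal parameter" here
# is the walk ratio (step length / cadence). Not the model used in the study.

def calibrate_walk_ratio(step_length_m, cadence_steps_per_s):
    return step_length_m / cadence_steps_per_s

def predict_gait(walk_ratio, target_speed_m_per_s):
    # speed = step_length * cadence and step_length = walk_ratio * cadence
    # => cadence = sqrt(speed / walk_ratio)
    cadence = (target_speed_m_per_s / walk_ratio) ** 0.5
    return {"cadence_steps_per_s": cadence, "step_length_m": walk_ratio * cadence}

ratio = calibrate_walk_ratio(step_length_m=0.55, cadence_steps_per_s=1.6)  # measured at preferred speed
print(predict_gait(ratio, target_speed_m_per_s=1.2))                       # predicted gait at a faster speed
```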

“This modeling effort is an excellent example of how computer models can make predictions of complex processes and accelerate the integration of knowledge across multiple disciplines,” says Grace Peng, Ph.D., director of the NIBIB program in Mathematical Modeling, Simulation and Analysis.

Fregly and his team believe this advance is the first step toward the creation of personalized neurorehabilitation prescriptions, filling a critical gap in the current treatment planning process for stroke patients. Together with devices that would ensure the patient is exercising using the proper force and torque, personalized computational models could one day help maximize the recovery of patients who have suffered a stroke.

Harmful impact of news hoaxes in society

The Observatory on Social Media at Indiana University has launched a powerful new tool in the fight against fake news.

The tool, called Hoaxy (http://hoaxy.iuni.iu.edu/), visualizes how claims in the news — and fact checks of those claims — spread online through social networks. The tool is built upon earlier work at IU led by Filippo Menczer, a professor and director of the Center for Complex Networks and Systems Research in the IU School of Informatics and Computing.

“In the past year, the influence of fake news in the U.S. has grown from a niche concern to a phenomenon with the power to sway public opinion,” Menczer said. “We’ve now even seen examples of fake news inspiring real-life danger, such as the gunman who fired shots in a Washington, D.C., pizza parlor in response to false claims of child trafficking.”

Previous tools from the observatory at IU include BotOrNot, a system to assess whether the intelligence behind a Twitter account is more likely a person or a computer, and a suite of online tools that allows anyone to analyze the spread of hashtags across social networks.

In response to the growth of fake news, several major web services are making changes to curtail the spread of false information on their platforms. Google and Facebook recently banned the use of their advertisement services on websites that post fake news, for example. Facebook also rolled out a system last week through which users can flag stories they suspect are false, which are then referred to third-party fact-checkers.

Over the past several months, Menczer and colleagues were frequently cited as experts on how fake news and misinformation spread in outlets such as PBS Newshour, Scientific American, The Atlantic, Reuters, Australian Public Media, NPR and BuzzFeed.

Giovanni Luca Ciampaglia, a research scientist at the IU Network Science Institute, coordinated the Hoaxy project with Menczer. Ciampaglia said a user can now enter a claim into the service’s website and see results that show both incidents of the claim in the media and attempts to fact-check it by independent organizations such as snopes.com, politifact.com and factcheck.org. These results can then be selected to generate a visualization of how the articles are shared across social media.
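Programmatically, that workflow amounts to searching for articles that match a claim and then assembling a graph of who shared what. The endpoint, parameters and response fields in the sketch below are assumptions made for illustration; consult the Hoaxy site for the actual interface.

```python
import requests        # third-party: pip install requests
import networkx as nx  # third-party: pip install networkx

# Hedged sketch of the Hoaxy workflow: search for articles matching a claim,
# then build a graph of who shared what. The URL, parameters and response
# fields are hypothetical; see hoaxy.iuni.iu.edu for the real interface.

HOAXY_SEARCH_URL = "http://hoaxy.iuni.iu.edu/api/articles"  # hypothetical endpoint

def search_articles(query):
    response = requests.get(HOAXY_SEARCH_URL, params={"query": query})
    response.raise_for_status()
    return response.json().get("articles", [])

def build_share_graph(shares):
    """shares: iterable of dicts with hypothetical 'from_user', 'to_user', 'article' keys."""
    graph = nx.DiGraph()
    for share in shares:
        graph.add_edge(share["from_user"], share["to_user"], article=share["article"])
    return graph

if __name__ == "__main__":
    articles = search_articles("cannabis cures cancer")
    print(f"{len(articles)} matching claims and fact-checks found")
```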

The site’s search results display headlines that appeared on sites known to publish inaccurate, unverified or satirical claims based upon lists compiled and published by reputable news and fact-checking organizations.

A search of the terms “cancer” and “cannabis,” for example, turns up multiple claims that cannabis has been found to cure cancer, a claim that has been roundly debunked by the reputable fact-checking website snopes.com. A search of social shares of articles that make the claim, however, shows a clear rise in people sharing the story, from fewer than 10 shares in July to hundreds by December.

Data sets for easier analysis

One way to handle big data is to shrink it. If you can identify a small subset of your data set that preserves its salient mathematical relationships, you may be able to perform useful analyses on it that would be prohibitively time consuming on the full set.

The methods for creating such “coresets” vary according to application, however. Last week, at the Annual Conference on Neural Information Processing Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the University of Haifa in Israel presented a new coreset-generation technique that’s tailored to a whole family of data analysis tools with applications in natural-language processing, computer vision, signal processing, recommendation systems, weather prediction, finance, and neuroscience, among many others.

“These are all very general algorithms that are used in so many applications,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper. “They’re fundamental to so many problems. By figuring out the coreset for a huge matrix for one of these tools, you can enable computations that at the moment are simply not possible.”

As an example, in their paper the researchers apply their technique to a matrix — that is, a table — that maps every article on the English version of Wikipedia against every word that appears on the site. That’s 1.4 million articles, or matrix rows, and 4.4 million words, or matrix columns.

That matrix would be much too large to analyze using low-rank approximation, an algorithm that can deduce the topics of free-form texts. But with their coreset, the researchers were able to use low-rank approximation to extract clusters of words that denote the 100 most common topics on Wikipedia. The cluster that contains “dress,” “brides,” “bridesmaids,” and “wedding,” for instance, appears to denote the topic of weddings; the cluster that contains “gun,” “fired,” “jammed,” “pistol,” and “shootings” appears to designate the topic of shootings.
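A miniature version of that computation looks like this: build a sparse document-by-word matrix and apply a low-rank approximation (here truncated SVD from scikit-learn) to pull out clusters of co-occurring words. The documents are toy stand-ins for Wikipedia articles.

```python
from sklearn.feature_extraction.text import CountVectorizer  # pip install scikit-learn
from sklearn.decomposition import TruncatedSVD
import numpy as np

# Tiny stand-in for the Wikipedia-scale computation: a sparse document-by-word
# matrix plus a low-rank approximation (truncated SVD) to surface clusters of
# co-occurring words. The documents are toy examples.

docs = [
    "the bride and bridesmaids wore dresses at the wedding",
    "the wedding dress was chosen by the bride",
    "the gun jammed before the pistol was fired",
    "shootings rose after the pistol was fired",
]

vectorizer = CountVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(docs)          # sparse document-by-word matrix
svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(matrix)

words = np.array(vectorizer.get_feature_names_out())
for i, component in enumerate(svd.components_):
    top = words[np.argsort(component)[::-1][:4]]
    print(f"topic {i}: {', '.join(top)}")
```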

Joining Rus on the paper are Mikhail Volkov, an MIT postdoc in electrical engineering and computer science, and Dan Feldman, director of the University of Haifa’s Robotics and Big Data Lab and a former postdoc in Rus’s group.

The researchers’ new coreset technique is useful for a range of tools with names like singular-value decomposition, principal-component analysis, and nonnegative matrix factorization. But what they all have in common is dimension reduction: They take data sets with large numbers of variables and find approximations of them with far fewer variables.

In this, these tools are similar to coresets. But coresets simply reduce the size of a data set, while the dimension-reduction tools change its description in a way that’s guaranteed to preserve as much information as possible. That guarantee, however, makes the tools much more computationally intensive than coreset generation — too computationally intensive for practical application to large data sets.

The researchers believe that their technique could be used to winnow a data set with, say, millions of variables — such as descriptions of Wikipedia pages in terms of the words they use — to merely thousands. At that point, a widely used technique like principal-component analysis could reduce the number of variables to mere hundreds, or even lower.

The researchers’ technique works with what is called sparse data. Consider, for instance, the Wikipedia matrix, with its 4.4 million columns, each representing a different word. Any given article on Wikipedia will use only a few thousand distinct words. So in any given row — representing one article — only a few thousand matrix slots out of 4.4 million will have any values in them. In a sparse matrix, most of the values are zero.

Crucially, the new technique preserves that sparsity, which makes its coresets much easier to deal with computationally. Calculations become a lot easier when they involve a great deal of multiplication by, and addition of, zero.
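A quick sketch shows why that matters in practice: storing only the non-zero entries of a mostly-zero matrix takes a tiny fraction of the memory a dense copy would need, and matrix-vector products only touch the stored entries. The sizes below are small stand-ins for the Wikipedia matrix.

```python
import numpy as np
from scipy.sparse import random as sparse_random  # pip install scipy

# Why sparsity matters computationally: a mostly-zero matrix can be stored and
# multiplied without ever touching the zeros. Sizes here are small stand-ins
# for the 1.4M x 4.4M Wikipedia matrix.

rng = np.random.default_rng(0)
matrix = sparse_random(10_000, 5_000, density=0.001, format="csr", random_state=0)

dense_bytes = matrix.shape[0] * matrix.shape[1] * 8  # what a dense float64 copy would need
sparse_bytes = matrix.data.nbytes + matrix.indices.nbytes + matrix.indptr.nbytes

print(f"stored non-zeros: {matrix.nnz}")
print(f"dense storage:  {dense_bytes / 1e6:.1f} MB")
print(f"sparse storage: {sparse_bytes / 1e6:.1f} MB")

# Matrix-vector products only visit stored entries, so they scale with the
# number of non-zeros, not with rows x columns.
vector = rng.standard_normal(matrix.shape[1])
result = matrix @ vector
print(result.shape)
```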

Differentiate between people with the same name

This conundrum occurs in a wide range of environments, from the bibliographic — which Anna Hernandez authored a specific study? — to law enforcement — which Robert Jones is attempting to board an airplane flight?

Two computer scientists from the School of Science at Indiana University-Purdue University Indianapolis and a Purdue University doctoral student have developed a novel machine-learning method to provide better solutions to this perplexing problem. They report that the new method is an improvement on existing approaches to name disambiguation because the IUPUI method works on streaming data, which enables the identification of previously unencountered John Smiths, Maria Garcias, Wei Zhangs and Omar Alis.

Existing methods can disambiguate an individual only if the person’s records are present in the machine-learning training data. The new method, by contrast, can perform non-exhaustive classification: it can detect that a new record appearing in streaming data actually belongs to a fourth John Smith, even if the training data contain records of only three different John Smiths. “Non-exhaustiveness” is a very important aspect of name disambiguation because training data can never be exhaustive; it is impossible to include records of all living John Smiths.

“Bayesian Non-Exhaustive Classification — A Case Study: Online Name Disambiguation using Temporal Record Streams” by Baichuan Zhang, Murat Dundar and Mohammad al Hasan is published in Proceedings of the 25th International Conference on Information and Knowledge Management. Zhang is a Purdue graduate student. Dundar and Hasan are IUPUI associate professors of computer science and experts in machine learning.

“We looked at a problem applicable to scientific bibliographies using features like keywords and co-authors, but our disambiguation work has many other real-life applications — in the security field, for example,” said Hasan, who led the study. “We can teach the computer to recognize names and disambiguate information accumulated from a variety of sources — Facebook, Twitter and blog posts, public records and other documents — by collecting features such as Facebook friends and keywords from people’s posts using the identical algorithm. Our proposed method is scalable and will be able to group records belonging to a unique person even if thousands of people have the same name, an extremely complicated task.

“Our innovative machine-learning model can perform name disambiguation in an online setting instantaneously and, importantly, in a non-exhaustive fashion,” Hasan said. “Our method grows and changes when new persons appear, enabling us to recognize the ever-growing number of individuals whose records were not previously encountered. Also, some names are more common than others, so the number of individuals sharing such a name grows faster than for other names. While working in a non-exhaustive setting, our model automatically detects these names and adjusts the model parameters accordingly.”

Machine learning employs algorithms — sets of steps — to train computers to classify records belonging to different classes. Algorithms are developed to review data, to learn patterns or features from the data, and to enable the computer to learn a model that encodes the relationship between patterns and classes so that future records can be correctly classified. In the new study, for a given name value, computers were “trained” by using records of different individuals with that name to build a model that distinguishes between individuals with that name, even individuals about whom information had not been included in the training data previously provided to the computer.
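The non-exhaustive idea can be illustrated with a deliberately simple sketch (a thresholded nearest-centroid rule, not the Bayesian model from the paper): a record is assigned to the closest known person with that name unless it is too far from all of them, in which case a brand-new person is created.

```python
import numpy as np

# Toy illustration of non-exhaustive classification (not the Bayesian model
# from the paper): records are feature vectors (e.g. keyword/co-author counts).
# A record joins the closest known person with that name, unless it is too far
# from all of them, in which case a new person is created.

class NonExhaustiveDisambiguator:
    def __init__(self, new_person_threshold=2.0):
        self.threshold = new_person_threshold
        self.centroids = {}  # person id -> running feature centroid
        self.counts = {}

    def assign(self, record):
        record = np.asarray(record, dtype=float)
        if self.centroids:
            distances = {pid: np.linalg.norm(record - c) for pid, c in self.centroids.items()}
            best = min(distances, key=distances.get)
            if distances[best] <= self.threshold:
                # update the running centroid of the matched person
                n = self.counts[best]
                self.centroids[best] = (self.centroids[best] * n + record) / (n + 1)
                self.counts[best] = n + 1
                return best
        new_id = f"person_{len(self.centroids) + 1}"
        self.centroids[new_id] = record
        self.counts[new_id] = 1
        return new_id

model = NonExhaustiveDisambiguator()
print(model.assign([5, 0, 1]))   # first "John Smith" -> person_1
print(model.assign([5, 1, 1]))   # similar record    -> person_1
print(model.assign([0, 9, 7]))   # very different    -> person_2 (previously unseen individual)
```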

Making artificial and real cells talk

The classic Turing test evaluates a machine’s ability to mimic human behavior and intelligence. To pass, a computer must fool the tester into thinking it is human — typically through the use of questions and answers. But single-celled organisms can’t communicate with words. So this week in ACS Central Science, researchers demonstrate that certain artificial cells can pass a basic laboratory Turing test by “talking” chemically with living bacterial cells.

Sheref S. Mansy and colleagues proposed that artificial life would need to have the ability to interact seamlessly with real cells, and that this could be evaluated in much the same way as a computer’s artificial intelligence is assessed. To demonstrate their concept, the researchers constructed nanoscale lipid vesicles capable of “listening” to chemicals that bacteria give off. The artificial cells showed that they “heard” the natural cells by turning on genes that made them glow. These artificial cells could communicate with a variety of bacterial species, including V. fischeri, E. coli and P. aeruginosa. The authors note that more work must be done, however, because only one of these species engaged in a full cycle of listening and speaking, in which the artificial cells sensed the molecules coming from the bacteria and the bacteria could perceive the chemical signal sent in return.

Developing monitoring system for seniors

“When faced with problems of the elderly in our closest family, it is us who experience major stress, not them,” says Egidijus Kazanavicius, Professor at Kaunas University of Technology (KTU), Director at the Centre of Real Time Computer Systems. Kazanavicius is heading the team of researchers from KTU and Lithuanian University of Health Sciences (LSMU), who are developing the monitoring system for seniors: upon registering a fall of a person, the system sends a notification to the carers.

“Falls are the leading cause of death in the elderly population and are a very common problem in geriatrics, symptomatic of a wide variety of health conditions. Besides causing physical injuries, falls lower a person’s self-confidence to move independently and are often a cause of various psychological problems,” says Dr Vita Lesauskaite, researcher at LSMU.

Working together, KTU and LSMU researchers created a prototype of a senior monitoring system, GRIUTIS, consisting of a set of fixed sensors placed in the premises and of the accompanying software. When the sensors register a change in a person’s behaviour or position, an alert is sent to the family and/or carers.
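The alerting logic might look roughly like the sketch below: watch for a sudden spike in acceleration followed by near-stillness, and notify the carers when both occur. The thresholds and the notify function are assumptions for illustration; the actual GRIUTIS detection rules are not described here.

```python
import math

# Illustrative alerting logic only; the actual GRIUTIS sensors and detection
# rules are not detailed here, so the threshold rule and notify_carers() stub
# below are assumptions.

IMPACT_THRESHOLD_G = 2.5     # sudden spike in acceleration suggests an impact
STILLNESS_THRESHOLD_G = 1.1  # near-gravity readings afterwards suggest lying still

def magnitude(sample):
    return math.sqrt(sample[0] ** 2 + sample[1] ** 2 + sample[2] ** 2)

def detect_fall(samples):
    """samples: list of (x, y, z) accelerations in g, ordered in time."""
    for i, sample in enumerate(samples):
        if magnitude(sample) > IMPACT_THRESHOLD_G:
            after = samples[i + 1:i + 6]
            if after and all(magnitude(s) < STILLNESS_THRESHOLD_G for s in after):
                return True
    return False

def notify_carers(message):
    # placeholder: the real system would push a notification to family/carers
    print("ALERT:", message)

readings = [(0, 0, 1.0)] * 5 + [(2.1, 1.8, 1.5)] + [(0, 0, 1.0)] * 10
if detect_fall(readings):
    notify_carers("Possible fall detected; please check on the resident.")
```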

The next step for the researchers is patenting the technologies and commercialising the product. The senior monitoring system GRIUTIS is expected to be used in geriatric clinics as soon as next year. The Lithuanian Research Council has allocated funds for the realisation of the project.