Keeping You Up On The Latest


Walmart Takes On Big Data


 

Many of Walmart's big data tools were developed at Walmart Labs, which was created after Walmart acquired Kosmix in 2011. Among the products developed at Walmart Labs are Social Genome, Shoppycat, and Get on the Shelf.

The Social Genome product allows Walmart to reach customers, or friends of customers, who have mentioned a product online, and to contact them about that exact product, often with a discount.
Public data from the web is combined with social data and proprietary data such as customer purchasing history and contact information. The result is a constantly changing, up-to-date knowledge base with hundreds of millions of entities and relationships, which gives Walmart a better understanding of what its customers are saying online. An example mentioned by Walmart Labs describes a woman who tweets regularly about movies. When she tweets “I love Salt”, Walmart is able to understand that she is talking about the movie Salt and not the condiment.
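As a purely illustrative sketch (not Walmart's actual system), that kind of disambiguation can be framed as comparing the words in a user's recent posts against movie-related and food-related vocabularies:

```python
# Illustrative only: a naive context-based disambiguator, not Walmart's Social Genome.
# Decides whether "Salt" in a tweet refers to the movie or the condiment
# by counting topic words in the user's recent posts.

MOVIE_WORDS = {"movie", "film", "watch", "trailer", "actress", "jolie", "theater"}
FOOD_WORDS = {"recipe", "cook", "pepper", "taste", "kitchen", "dinner", "flavor"}

def disambiguate_salt(recent_posts):
    """Return 'movie' or 'condiment' based on which context words dominate (ties favor the movie)."""
    words = {w.strip(".,!?").lower() for post in recent_posts for w in post.split()}
    movie_score = len(words & MOVIE_WORDS)
    food_score = len(words & FOOD_WORDS)
    return "movie" if movie_score >= food_score else "condiment"

posts = ["Just saw the new trailer, Angelina Jolie is great",
         "Going to the movie theater again this weekend",
         "I love Salt"]
print(disambiguate_salt(posts))  # -> 'movie'
```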

The Shoppycat product, developed by Walmart, recommends suitable products to Facebook users based on the hobbies and interests of their friends.

Get on the Shelf was a crowd-sourcing solution that gave anyone the chance to promote his or her product in front of a large online audience. The best products would be sold at Walmart, with the potential to suddenly reach millions of customers.

Techfest 2013 and Microsoft’s Predictive Whiteboard

 


Big Data and Hospitals


 

Big data is emerging in hospitals and the health industry because systems collect large amounts of data on patients every single day. The data comes from a variety of settings: clinical, billing, scheduling and so on. Previously, a lot of that data was not leveraged to make patient care and hospital operations better. Recently, though, there has been a shift to change that. According to Joel Dudley, MD, director of biomedical informatics for The Mount Sinai Medical Center in New York City, healthcare organizations have come to realize that all of their data can be captured and leveraged as a strategic asset.

Dr. Dudley said, “Big data is not just about storing huge amounts of data. It’s the ability to mine and integrate data, extracting new knowledge from it to inform and change the way providers, even patients, think about healthcare.”

Anil Jain, MD, CMIO of Explorys, a healthcare analytics company, and former senior executive director of information technology at Cleveland Clinic, says big data will change how physicians take care of patients at an individual level, fostering more personalized support right at a patient's bedside.

“The analysis to deal with big data can produce valid and relevant data that is more current, which gives physicians the means and motivation to make the right decisions at the right time,” says Michael Corcoran, senior vice president and chief marketing officer of Information Builders, a business intelligence and software solutions company.

Several factors are driving the growth of healthcare data:

• The federal push for electronic health records has increased the number of hospitals and providers who use them, subsequently increasing the amount of electronic data generated.
• Newer reimbursement models and accountable care organizations need large amounts of information to be analyzed in order to more accurately understand what occurs with patients.
• New technology in general, including devices, implants and mobile applications on smartphones and tablets, has increased the amount of data available to providers.

According to Dr. Robicsek, vice president of clinical and quality informatics at NorthShore University Health System in Evanston, IL, big data also provides predictive models for the likelihood of readmission within 30 days, another area NorthShore is targeting with its big data and informatics work.
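As a rough illustration of what such a readmission model looks like in practice, here is a toy 30-day readmission classifier built with scikit-learn's logistic regression. The features and data are invented for the example; this is not NorthShore's actual model:

```python
# Illustrative only: a toy 30-day readmission risk model, not NorthShore's system.
# Feature names (age, prior admissions, length of stay, medications) are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per past hospital stay.
X = np.array([
    [72, 3, 9, 12],   # age, prior admissions, length of stay (days), medications
    [45, 0, 2, 3],
    [81, 5, 14, 16],
    [60, 1, 4, 6],
    [38, 0, 1, 2],
    [77, 2, 7, 10],
])
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = readmitted within 30 days

model = LogisticRegression().fit(X, y)

# Score a new patient at discharge.
new_patient = np.array([[68, 2, 6, 9]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated 30-day readmission risk: {risk:.0%}")
```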

 

 

Differential Privacy and Big Data


Microsoft Research is developing differential privacy technology that would serve as a privacy guard, a go-between when researchers query databases. It would ensure that no individual could be re-identified, protecting privacy by keeping people anonymous in databases while still helping researchers sort big data.
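The usual mechanism behind this kind of guard is to add carefully calibrated random noise to each query answer, so that the presence or absence of any single person barely changes the result. Here is a minimal sketch of the standard Laplace mechanism for a counting query, shown as a generic illustration rather than Microsoft's implementation:

```python
# Illustrative only: the classic Laplace mechanism for a counting query,
# not Microsoft's actual privacy-guard implementation.
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Answer "how many records satisfy predicate?" with differential privacy.

    A counting query has sensitivity 1 (one person changes the count by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical patient records: (age, has_condition)
records = [(34, True), (51, False), (68, True), (45, True), (29, False)]
print(private_count(records, lambda r: r[1]))  # noisy count of patients with the condition
```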

Differential Privacy for Everyone (Microsoft white paper)

 

Big Data

Big data is measured in terabytes, petabytes, or more. Data becomes “big data” when it outgrows your current ability to process it, store it, and cope with it efficiently. Storage has become very cheap in the past ten years, allowing loads of data to be collected. However, our ability to actually process the loads of data quickly has not scaled as fast. Traditional tools to analyze and store data (SQL databases, spreadsheets, the Chinese abacus) were not designed to deal with vast data problems. The amount of information in the world is now measured in zettabytes. A zettabyte, which is 10^21 bytes (that is, a 1 followed by twenty-one zeroes), is a big number. Imagine writing three paragraphs describing your favorite movie; that is about 1 kilobyte. Next, imagine writing three paragraphs for every grain of sand on the earth; that amount of information is in the zettabyte range.
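A quick back-of-envelope version of that comparison, using the commonly cited rough estimate of about 7.5 x 10^18 grains of sand on Earth's beaches:

```python
# Back-of-envelope check of the grain-of-sand comparison.
# The ~7.5e18 grains figure is a commonly cited rough estimate, not a measurement.
KILOBYTE = 1_000            # ~three short paragraphs of plain text
grains_of_sand = 7.5e18     # rough estimate of grains of sand on Earth's beaches
zettabyte = 10 ** 21        # bytes

total_bytes = grains_of_sand * KILOBYTE
print(f"{total_bytes:.1e} bytes = {total_bytes / zettabyte:.1f} zettabytes")
# -> roughly 7.5 zettabytes, i.e. well into the zettabyte range
```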

The best tool available today for processing and storing herculean amounts of big data is Hadoop. Hundreds or thousands of computers are thrown at the big data problem, rather than a single computer.

Hadoop makes data mining, analytics, and processing of big data cheap and fast. Hadoop can take most of your big data problems and unlock the answers, because you can keep all your data, including all of your historical data, and get an answer before your children graduate college.

Apache Hadoop is an open-source project inspired by research published by Google. Hadoop is named after the stuffed toy elephant of the lead programmer's son. In Hadoop parlance, the group of coordinated computers is called a cluster, and the individual computers in the cluster are called nodes.
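To make the mapper/reducer idea concrete, here is a minimal sketch of the classic word-count job written in the Hadoop Streaming style, where any script that reads standard input and writes standard output can act as a mapper or reducer. The file names mapper.py and reducer.py are just illustrative:

```python
# mapper.py (illustrative): emit "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word.lower()}\t1")
```

```python
# reducer.py (illustrative): sum the counts for each word.
# Hadoop Streaming delivers the mapper output to the reducer sorted by key,
# so identical words arrive on consecutive lines.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

The pair can be tested on a single machine with a shell pipeline such as cat input.txt | python mapper.py | sort | python reducer.py; on a cluster, the same scripts are submitted through the hadoop-streaming jar and Hadoop spreads the map and reduce work across the nodes.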
