July 29, 2014
Can education catch up to progress? Perhaps, especially when corporations take an interest. Fortune discusses “Educating the ‘Big Data’ Generation.” As companies try to move from simply collecting vast amounts of data to putting that information to use, they find a serious dearth of qualified workers in the field. In fact, Gartner predicted in 2012 that 4.4 million big-data IT jobs would be created globally by 2015 (1.9 million in the U.S.). Schools are now working to catch up with this demand, largely as the result of prodding from the big tech companies.
The field of big data collection and analysis presents a requirement that until recently was rare: workers who understand both technology and business. Reporter Katherine Noyes cites MIT’s Erik Brynjolfsson, who will be teaching a course on big data this summer:
“‘We have more data than ever,’ Brynjolfsson said, ‘but understanding how to apply it to solve business problems needs creativity and also a special kind of person.’ Neither the ‘pure geeks’ nor the ‘pure suits’ have what it takes, he said. ‘We need people with a little bit of each.’”
Over at Arizona State, which boasts year-old master’s and bachelor’s programs in data analytics, Information Systems chair Michael Goul agrees:
“‘We came to the conclusion that students needed to understand the business angle,’ Goul said. ‘Describing the value of what you’ve discovered is just as key as discovering it.’”
In order to begin meeting this new need for business-minded geeks (or tech-minded business people), companies are helping schools develop programs to churn out that heretofore suspect hybrid. For example, Noyes writes:
“MIT’s big-data education programs have involved numerous partners in the technology industry, including IBM […], which began its involvement in big data education about four years ago. IBM revealed to Fortune that it plans to expand its academic partnership program by launching new academic programs and new curricula with more than twenty business schools and universities, to begin in the fall….
“Business analytics is now a nearly $16 billion business for the company, IBM says—which might be why it is interested in cultivating partnerships with more than 1,000 institutions of higher education to drive curricula focused on data-intensive careers.”
Whatever forms these programs, and these jobs, ultimately take, one thing is clear: for those willing and able to gain the skills, the field of big data is wide open. Anyone with a strong love of (and aptitude for) working with data should consider entering the field now, while competition for qualified workers is so very high.
Cynthia Murrell, July 29, 2014
March 9, 2014
If Wolfram Alpha had been around when I was in high school, it would have made my math and science homework a whole lot easier. And Wolfram Alpha can do far more than solve physics equations. The smart database just released a new endeavor called the Documentation Center.
The Documentation Center is still a preliminary version, but here is how its makers describe it:
“The Wolfram System’s unified computation and dynamic document architecture makes possible a new level of interactive presentation—notably allowing finished “slides” on which full interactive input and dynamic computation can still be done. The Wolfram Language’s cell-structured documents also conveniently allow calculations leading up to graphics or other elements to be maintained in the underlying document, but hidden for presentation.”
A whole new level of interactivity with data is a great idea! It makes the material more engaging and gives Wolfram a chance to improve its quality. Browsing through the new Documentation Center, however, is confusing: it explains what the system can do, but not how to use it. Perhaps it requires a paid membership. It looks like a system for the one percent.
March 3, 2014
Usually when money and students are mentioned in headlines, it is about student debt and the rising cost of tuition. Oracle has a more positive headline about this topic: “Metropolitan State University Reduces Students’ Nonpayments From US$4 Million To US$700,000 Per Year.”
Metropolitan State University, based in Minnesota, was losing $4 million a year to unpaid student tuition. The solution was to deploy Oracle RightNow to improve communication channels with students and to establish a student-relationship system that keeps track of conversations.
After deploying Oracle RightNow, the immediate problems were resolved. The centralized system sent quick, individualized responses, improved efficiency, streamlined application tracking, and, most importantly, sent out triggered messages to students, reducing nonpayments from $4 million to $700,000 a year.
Metro State selected Oracle because:
“‘We chose Oracle RightNow for its extensive reporting and analytics capabilities, which are far better than any other higher education customer relationship management tool on the market. Having the ability to easily put rich data in the hands of our advisors has really propelled us to the next level,’ said Andrew Melendres, vice president, student affairs and enrollment management, Metropolitan State University.”
Universities are slashing budgets left and right. Recovering several million dollars in unpaid tuition boosted Metro State’s budget and made the school an example for others. We would expect that Harvard, Yale, and Stanford will follow Metro State’s lead.
February 18, 2014
Yay, free books! We love pointing out free resources, and now two textbooks relevant to content processing are available (in embedded PDF form) without charge via Yumpu.com. One, Information Technology for Management, from Turban, McLean, and Wetherbe, aims to bridge a crucial knowledge gap that plagues many businesses. It begins its task of educating managers in the mysteries of IT with a chapter titled, “Strategic Use of Information Technology in the Digital Economy.” As one might expect from a textbook, each chapter includes discussion questions and exercises. The authors illustrate their points with real-life examples from recognizable companies.
Another book, Data Mining Methods and Models, by Daniel T. Larose at Central Connecticut State University, tackles the building of data models. Like the above book, questions and exercises are provided for your enjoyment. This is the second book in a series. Its preface specifies that this volume:
“…explores the process of data mining from the point of view of model building: the development of complex and powerful predictive models that can deliver actionable results for a wide range of business and research problems…. [It provides]
*Models and techniques to uncover hidden nuggets of information
*Insight into how the data mining algorithms really work
*Experience of actually performing data mining on large data sets.”
If either of these sounds like it could be of use to you (or to someone you’ve been trying to explain these things to), don’t miss out on these free resources.
Cynthia Murrell, February 18, 2014
February 11, 2014
The Probability and Statistics Cookbook from Matthias Vallentin is a free statistics text. Vallentin is a doctoral student in computer science at UC Berkeley, where he works with Vern Paxson and has served as a teaching assistant for an undergraduate computer security course. He also works for the International Computer Science Institute, and his research in network intrusion and network forensics began during his undergraduate studies in Germany. The “cookbook” is explained in the article:
“The cookbook aims to be language agnostic and factors out its textual elements into a separate dictionary. It is thus possible to translate the entire cookbook without needing to change the core LaTeX source and simply providing a new dictionary file. Please see the github repository for details. The current translation setup is heavily geared to Roman languages, as this was the easiest way to begin with. Feel free to make the necessary changes to the LaTeX source to relax this constraint.”
The overview provides screenshots that make it clear the cookbook is more interested in the mathematical essentials than in elaborate explanations. The author is open to pull requests to expand the cookbook; in the meantime, the LaTeX source code can be found on GitHub.
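The dictionary mechanism the quote describes is a standard LaTeX pattern: textual labels live in a separate file of macro definitions, so translating the cookbook means supplying a new dictionary file rather than touching the core source. The macro and file names below are illustrative, not the ones Vallentin actually uses; this is just a minimal sketch of the approach:

```latex
% dictionary-en.tex -- hypothetical English dictionary file
\newcommand{\strDistribution}{Distribution}
\newcommand{\strMean}{Mean}
\newcommand{\strVariance}{Variance}

% A French translation would ship dictionary-fr.tex with the same
% macro names but translated bodies, e.g.
%   \newcommand{\strMean}{Moyenne}

% cookbook.tex -- core source, language agnostic
\documentclass{article}
\input{dictionary-en} % swap in dictionary-fr to translate
\begin{document}
\section{\strDistribution}
\strMean: $\mu = \mathbb{E}[X]$ \qquad
\strVariance: $\sigma^2 = \mathbb{E}[(X - \mu)^2]$
\end{document}
```

Because the core source refers only to the macro names, a translator never needs to edit the math itself, which is what makes the setup "language agnostic."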
Chelsea Kerwin, February 11, 2014
January 31, 2014
“Are Gifted Children Getting Lost In The Shuffle?” is the question David Lubinski asked in a study that tracked 320 gifted children from age 13 until 38. News At Vanderbilt details how 203 of the children earned a master’s degree or higher, of which 142 earned doctorates. Most of the children had successful careers, becoming doctors, attorneys, software engineers, Fortune 500 senior leaders, and even an advisor to the US President.
The article explains:
“Despite their remarkable success, researchers concluded that the profoundly gifted students had experienced roadblocks along the way that at times prevented them from achieving their full potential. Typical school settings were often unable to accommodate the rapid rate at which they learned and digested complex material. When students entered elementary and high school classrooms on day one having already mastered the course material, teachers often shifted focus away from them to those struggling with the coursework. This resulted in missed learning opportunities, frustration and underachievement, particularly for the exceptionally talented, the researchers suggest.”
The article continues that gifted children were often left to their own devices because they already understood the course material, and that schools, while offering special education classes, fail to provide programs for the gifted. Lubinski concluded that challenging these children would motivate them to push themselves farther.
No argument with the study, except: did it take environmental and socioeconomic factors into consideration? What other factors helped these children achieve success, and how exactly was a “roadblock” defined?
Whitney Grace, January 31, 2014
January 7, 2014
There is a troubling article over at Priceonomics titled, “Fraud in the Ivory Tower.” The post begins with the tale of former Tilburg University professor Diederik Stapel, who was found in 2012 to have fabricated or manipulated data in at least 30 papers that had been published in peer-reviewed journals. This case is a dramatic example of a growing problem; Fang Labs reports that instances of fraud or suspected fraud tripled between the 2002-2006 period and 2007-2011. Why the uptick?
We’re reminded that the famed “publish or perish” academic culture grows ever more demanding. At the same time, policies at scientific journals often mean that research integrity takes a back seat to provocative assertions.
“According to experimental psychologist Chris Chambers, high-impact journals (particularly in the field of psychology) look for results that are ‘exciting, eye-opening, even implausible.’ Novelty pieces. As psychologist Joseph Simmons told the science journal Nature: ‘When we review papers, we’re often making authors prove that their findings are novel or interesting. We’re not often making them prove that their findings are true.’”
Lovely. The write-up goes on to reveal that retractions are on the rise: the PubMed database contained only three publication retractions in 2000, but 180 in 2009. What’s more, these retractions occur most often at high-prestige journals (as measured by how often their papers are cited in other works).
The article states:
“Again, it is possible that this increase is caused by a stronger online watchdog culture. But regardless of whether the fraud is new or newly discovered, the case of Diederik Stapel reveals the ugly underbelly of scientific research. The pressure to publish frequently in prestigious journals has made it more likely for researchers to cut corners and manipulate data.”
The piece naturally concludes with a call for improvement. In doing so, the writer supplies this link to an article advocating open access to academic papers. Interesting.
Cynthia Murrell, January 07, 2014
December 2, 2013
Here is an interesting approach to academic freedom. The Chicago Tribune informs us that “Chicago State University Wants Faculty Blog Shut Down.” The blog in question, the Faculty Voice Blog, has dared to be critical of the University administration, so the school and its lawyers have sent an official “cease and desist” notice. Rather than engage the unhappy professors in civil debate, it seems the school has suddenly decided it has a problem with the blog’s use of its trademarks and trade names. (The blog has been active, and using these “trade names and marks,” since 2009.) The notice also characterizes the posts as unprofessional and uncivil, thereby violating University policy. No word on why the school believes its policy trumps the First Amendment to the U.S. Constitution.
Reporter Juan Perez Jr. cites Phillip Beverly, the associate political science professor who founded the blog. The article relates:
“Roughly eight faculty members contributed to the site, Beverly said, under their own names or pseudonyms. The website used a picture of an on-campus Chicago State University sign and ‘CSU’ hedge sculpture. But Monday evening, after receiving the letter, Beverly changed the site’s name to ‘Crony State University’ and replaced its main image with a building from another campus.”
That’s one way to deal with specious charges (don’t worry, he is also consulting a lawyer). Beverly started the website specifically to provide a forum for discussing problems at the University, like disappointing graduation rates, poor money management, and inadequate leadership. For its part, the school seems to feel it has the right to dictate what information makes it into the public sphere.
“Last year, Chicago State officials instructed faculty and staff that only authorized university representatives could share information with the media—and that everything from opinion pieces to social media communications could require prior approval.”
This is not the first time Chicago State has run afoul of the First Amendment. Just last year, a federal judge decided the school had violated rights granted by that hallowed document when it fired another outspoken professor, Steven Moore. Perhaps University administrators should audit a few classes on constitutional law.
Cynthia Murrell, December 02, 2013
October 18, 2013
Science magazine has published an important article about today’s open-access academic journals: “Who’s Afraid of Peer Review?” I highly recommend reading the entire piece, but I’ll share some highlights here. Journalist John Bohannon begins:
“On 4 July, good news arrived in the inbox of Ocorrafoo Cobange, a biologist at the Wassee Institute of Medicine in Asmara. It was the official letter of acceptance for a paper he had submitted 2 months earlier to the Journal of Natural Pharmaceuticals, describing the anticancer properties of a chemical that Cobange had extracted from a lichen.
“In fact, it should have been promptly rejected. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless. I know because I wrote the paper.”
You see, Science performed an elaborate sting operation across the rapidly growing field of open-access journal publication. Most of these journals make money by charging authors upon acceptance of their articles; Bohannon began to suspect a number of these publications were motivated to accept papers that would not stand up to rigorous peer review, despite assertions on their websites to the contrary. What he found is truly disheartening.
See the article for the methodology behind the fake paper and Bohannon’s submissions procedure, both of which are informative in themselves. The results are dismal: when the article went to press, far more journals (157) had accepted the bogus paper than rejected it (98). Even respected publishers like Elsevier and Sage were found to host at least one of these questionable journals. Most of the publishers that performed any review at all focused on mechanical issues like formatting, not substance. What is going on here? Bohannon offers:
“A striking picture emerges from the global distribution of open-access publishers, editors, and bank accounts. Most of the publishing operations cloak their true geographic location. They create journals with names like the American Journal of Medical and Dental Sciences or the European Journal of Chemistry to imitate—and in some cases, literally clone—those of Western academic publishers. But the locations revealed by IP addresses and bank invoices are continents away: Those two journals are published from Pakistan and Turkey, respectively, and both accepted the paper…
“About one-third of the journals targeted in this sting are based in India—overtly or as revealed by the location of editors and bank accounts—making it the world’s largest base for open-access publishing; and among the India-based journals in my sample, 64 accepted the fatally flawed paper and only 15 rejected it.”
So, opportunists in the developing world have seized upon faux-reviewed academic publishing as the way to turn a PC and an Internet connection into profits. Good for them, bad for science. How does one know when Bing or Google links to fake info? Does it matter anymore? I have to think it does. I hope that people in the field, like Bohannon, who care about open access to legitimate research will find a way to counter this flood of bad information. In the meantime, well… don’t believe every link you read.
Cynthia Murrell, October 18, 2013
October 14, 2013
Academic publishing juggernaut Elsevier produces a resource for journal editors called, reasonably enough, Editors’ Update. Retraction Watch calls our attention to the Update’s recent series on ethics in, “Editor: ‘Close to 10% of the Papers We Receive Show Some Sign of Academic Misconduct’.” While the Elsevier ethics series also addresses topics like bias and research misconduct, it is the prevalence of plagiarism that concerns Retraction Watch’s Ivan Oransky. He pulls this quote from a piece in the series by Henrik Rudolph, editor in chief of Applied Surface Science:
“Close to 10% of the papers we receive show some sign of academic misconduct, but since the total number of submissions is increasing, the absolute number is also rising. The most common issue we see is too large an overlap with previously published material, i.e. plagiarism. Cases are evenly divided between self-plagiarism and regular plagiarism. These submissions are most often identified in the editorial phase (by the managing editor or editor) and are rejected before they are sent out for review. iThenticate is an important instrument for detecting academic misconduct, but often common sense is an equally important instrument. . . . If it looks fishy it probably is fishy.”
Examples of fishy-looking content include a sudden shift from U.K. to U.S. English and spelling errors copied straight from the original. Oransky supplies descriptions of the articles to be found in part one of the Elsevier Ethics Special Edition, as well as a brief blurb on what to expect from part two. Check it out for more on this unsettling tendency.
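How tools like iThenticate work internally is proprietary, but the “sudden shift from U.K. to U.S. English” tell is simple enough to illustrate. The sketch below is a toy heuristic, not anything an actual plagiarism detector is known to use: it tallies variant spellings per paragraph and flags paragraphs whose dominant variant disagrees with the document as a whole. The word lists are tiny and purely illustrative:

```python
# Toy heuristic: flag paragraphs whose English variant (UK vs. US spelling)
# disagrees with the document's overall majority variant.
UK = {"colour", "behaviour", "analyse", "centre", "modelling", "favour"}
US = {"color", "behavior", "analyze", "center", "modeling", "favor"}

def variant_counts(text):
    """Count UK-variant and US-variant spellings in a chunk of text."""
    words = [w.strip(".,;:()\"'").lower() for w in text.split()]
    return sum(w in UK for w in words), sum(w in US for w in words)

def flag_shifts(paragraphs):
    """Return indices of paragraphs whose variant differs from the document's."""
    totals = [variant_counts(p) for p in paragraphs]
    uk_total = sum(u for u, _ in totals)
    us_total = sum(s for _, s in totals)
    doc_is_uk = uk_total >= us_total  # document-level majority variant
    flagged = []
    for i, (u, s) in enumerate(totals):
        if u == s == 0:
            continue  # no spelling evidence either way
        if (u > s) != doc_is_uk:
            flagged.append(i)
    return flagged

paragraphs = [
    "The colour and behaviour of the sample were recorded.",
    "We analyse the centre of the distribution.",
    "The color and behavior of the sample were recorded.",  # pasted-in US text
]
print(flag_shifts(paragraphs))
```

A flagged paragraph is only a prompt for a human look, of course; as Rudolph says, common sense remains an equally important instrument.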
Cynthia Murrell, October 14, 2013