De-Archiving: Where Is the Money to Deliver Digital Beef?

February 25, 2018

I read “De-Archiving: What Is It and Who’s Doing It?” I don’t want to dig into the logical weeds of the essay. Let’s look at one passage I highlighted.

As the cost of hot storage continues to drop, economics work in favor of taking more and more of their stored material and putting it online. Millions of physical documents, films, recordings, photographs, and historical data are being converted to online digital assets every year. Soon, anything that was worth saving will also be worth putting online. Tomorrow’s warehouse will be a data center filled with spinning disks that safely store any valuable data – even if it has to be converted to a digital format first. “De-archiving” will be a new vocab word for enterprises and individuals everywhere – and everyone will be doing it in the near future.

My hunch is that the thought leader who wrote the phrase “anything that was worth saving will be worth putting online” has not checked out the holdings of the Library of Congress. The American Memory project, on which I worked, represents a minuscule percentage of the non-text information the LoC has. Toss in text, boxes of manuscripts, and artifacts (3D imaging and indexing). The amount of money required to convert and index the content might strain the US budget, which seems to wobble along on continuing resolutions.

Big ideas are great. Reality may not be as great. Movies which can disintegrate during conversion? Yeah, right. Easy. Economical.

Stephen E Arnold, February 25, 2018

Kentucky Technology: Stick with Horse Racing

February 21, 2018

Who knows if the information in “KFC: Enemy of Waistlines, AI, Arteries and Logistics Software” is steroid infused or faux chicken.


I loved the factoids in the write up for three reasons:

  1. I live in Harrod’s Creek, Kentucky, where fried squirrel is almost as popular as fried chicken with those herbs, spices, and what nots.
  2. Kentucky Fried Chicken has, according to local legend, had some squabbles in the software barnyard. Does the Taco Bell system play nice with the fried chicken outlets’ systems? What about restaurant management software from folks in the even deeper South, down Atlanta way?
  3. Kentucky Fried Chicken is famous in certain circles for the perfect celebratory feast. In Chicago, so the chatter goes, the buckets are a required food stuff after an event.

Here are the factoids I noted:

  • Self driving cars think the KFC logo is a stop sign. (I watched the Yandex self driving car video and it sailed right by what appeared to be fast food joints. If I spot a Yandex car braking for a bucket, I will pass along the information.)
  • A software glitch nuked some important information.
  • A shift to DHL from an outfit called Bidvest created a chicken shortage in some UK KFC outlets. No chicken? How does one fix this? Hit the Waitrose? Nah, shut the fried chicken shops.

Are you hungry for a two piece meal with the mandatory biscuit? Tip: Don’t tell the human at the counter to skip the biscuit. You have to wait if none are sitting on the ready line. Don’t like it? Hmmm.

Stephen E Arnold, February 21, 2018

The Next Stage in Information Warfare: Quantum Weaponization

February 20, 2018

We have been tracking the emergence of peer-to-peer technologies. Innovators have been working to deal with the emergence of next-generation mainframe computing architectures like those available from Amazon, Facebook, and Google. The idea is that these new mainframes have popped up a level and are using software to integrate individual computing devices into larger constructs which function as command-and-control systems.

Examples of the innovations can be found in the digital currency sector with the emergence of IOTA-like systems. There are other innovation nodes as well; some are discussed in online publications like Medium and in technical fora, and some have been implemented by outfits like Anonymous Portugal.

One of the popular methods used by my former colleagues at Halliburton Nuclear Utility Services was to look at a particular problem. The nuclear engineers would then try to fit the problem into a meta-schema. The idea was that a particular problem in some nuclear applications could not be tackled directly. A nuclear engineer tried to find ways to address the problem without poking the specific issue because once probed, the problem morphed. Hence, the meta-method was more useful.

Here’s a diagram which I think shows one facet of the approach:


The idea is to come at a problem in a different way. Edward de Bono called it “lateral thinking.” For me, the idea is to pop outside a problem, not in two dimensions, but three or four if time plays a part. Maybe “meta-thinking” or “meta-analysis”?

What’s ahead for “the Internet” is what I conceptualize as urban warfare in the online world.

Non-traditional approaches to security, messaging, and data routing will combine to create a computing environment that’s different. Smart software will allow nodes or devices to make local decisions, and then that same smart software will use random message pathways to accomplish a task like routing. The difference between today’s concentrated Internet and this new environment will be similar to Caesar’s Third Legion engaging in urban warfare. Caesar’s troops have swords; the urban fighters have modern weapons. Not even mighty Caesar can deal with the mismatch in technology.
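The “local decisions plus random pathways” idea can be sketched in a few lines of code. This is a toy random-walk router, not any real protocol: the network, node names, and hop budget are all invented for illustration. Each node forwards the message to a randomly chosen neighbor, so no central authority sees the whole route, and repeated sends take different paths.

```python
import random

# Toy peer-to-peer network as an adjacency map: node -> neighbors.
# Topology is invented for illustration.
NETWORK = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c"],
}

def random_route(network, source, destination, max_hops=100, rng=None):
    """Forward a message along a random path. Each node makes a purely
    local decision (pick a random neighbor) with no global view, so
    the route differs from one send to the next."""
    rng = rng or random.Random()
    path = [source]
    node = source
    for _ in range(max_hops):
        if node == destination:
            return path
        node = rng.choice(network[node])
        path.append(node)
    return None  # undeliverable within the hop budget

if __name__ == "__main__":
    # Two sends between the same endpoints, two different routes.
    print(random_route(NETWORK, "a", "d", rng=random.Random(7)))
    print(random_route(NETWORK, "a", "d", rng=random.Random(8)))
```

The point of the sketch: an observer intercepting traffic at one node sees only one hop of one random path, which is one reason such designs complicate timely sense making of intercepts.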

Several observations:

  1. More robust encryption methods will make timely sense making of intercepted data very, very difficult
  2. Smart software will create polymorphic solutions to what are today difficult problems
  3. The diffusion of intelligent computing devices (including light bulbs) will generate data volumes which will be difficult to process for meaningful signals by components not embedded in the polymorphic fabric. (Yes, this means law enforcement and intelligence entities.)
  4. The era of the “old” Internet is ending, but the shift is underway. The movement is from a pointed stick to a cellular structure filled with adaptable proteins. The granularity and the “intelligence” of the tiny bits will be fascinating to observe.

In one sense, the uncertainty inherent in many phenomena will migrate online.

The shift is not inherently “bad.” New opportunities will arise. The shift will have significant impacts, however. Just as the constructs of the industrial age have been reshaped by the “old” Internet, the new polymorphic, quantum-ized Internet will usher in some interesting changes.

Is digital Ebola replicating now, gentle reader?

Stephen E Arnold, February 20, 2018

Winter Olympics Opening: Was It a Demo?

February 11, 2018

I love digital technology. I even have a computer with video editing software. It seems that other folks follow my lead. Many are younger than I. I know this because the opening ceremony drone extravaganza was a demo.

I read “Drones Grounded at Opening Ceremony — But Not on Tape Delay.” I assume the write up is accurate, although even “real” news outfits have issues with “fake news.”

The line between reality and post production seems blurry. Does it matter? Not to advertisers as long as they get eyeballs. And Intel? Well, at least the post production drone show works unlike some of the firm’s technology.

Stephen E Arnold, February 11, 2018

Google Translate Gets a Needs Improvement on Its Translation System

February 5, 2018

I read “The Shallowness of Google.” The critique is not from a trendy start up in Silicon Valley or an academic who flopped in a Google interview. The analysis is by Douglas Hofstadter. If the name does not ring a bell, this is the fellow who wrote Gödel, Escher, Bach, a quite fun read.

The main point of the write up is that Google’s implementation of its artificial intelligence and machine learning technology for Google Translate is bad.


Google wants to be perceived as the alpha dog in smart software. Do you want to take this canine’s kibble? Google can bite even though it may not get the whole “idea” and “understanding” behind a reprimand.

Mr. Hofstadter writes:

Having ever more “big data” won’t bring you any closer to understanding, since understanding involves having ideas, and lack of ideas is the root of all the problems for machine translation today. So I would venture that bigger databases—even vastly bigger ones—won’t turn the trick.

The idea is that “understanding” is not baked into Google Translate. Mr. Hofstadter provides examples of botched translations from French, German, and Chinese, and he notes that Google Translate does not look up information in Google Search. Mr. Hofstadter does.

He points out:

Google Translate can’t understand web pages, although it can translate them in the twinkling of an eye.

He correctly observes:

As long as the text in language B is somewhat comprehensible, many people feel perfectly satisfied with the end product. If they can “get the basic idea” of a passage in a language they don’t know, they’re happy.

Mr. Hofstadter touches upon two issues, which another informed critic might convert to a write up in the Atlantic:

  1. Google is simply delivering “good enough” services. The object is advertising, not outputting on-point products and services for a tiny fraction of its user base
  2. Google’s hype about its smart software is only slightly less off-the-wall than the marketing of IBM Watson. The drum beat for smart software is necessary to attract young programmers who might otherwise defect to Amazon or other Google competitors and to further the illusion that Google’s technology is magical, maybe otherworldly and definitely the alpha dog in the machine learning Iditarod.

The write up is worth reading. However, I would not run it through Google Translate if you prefer to ingest the article in one of Google Translate’s supported languages.

And for a person going through the Google interview process, it is not a plus to suggest that Google’s technology might merit little more than a C or possibly an F. Rah rah is a better choice.

That’s why we love Google Translate here in Harrod’s Creek, but we have switched to Free Translations.org since Google implemented a word limit.

Stephen E Arnold, February 5, 2018

DarkCyber for January 30, 2018, Now Available

January 30, 2018

DarkCyber for January 30, 2018, is now available at www.arnoldit.com/wordpress and on Vimeo at https://vimeo.com/253109084.

This week’s program looks at the 4iq discovery of more than one billion user names and passwords. The collection ups the ante for stolen data. The Dark Web database contains a search system and a “how to” manual for bad actors. 4iq, a cyber intelligence specialist, used its next-generation system to locate and analyze the database.

Stephen E Arnold said:

“The technology powering 4iq combines sophisticated data acquisition with intelligent analytics. What makes 4iq’s approach interesting is that the company integrates trained intelligence analysts in its next-generation approach. The discovery of the user credentials underscores the importance of 4iq’s method and the rapidly rising stakes in online access.”

DarkCyber discusses “reputation scores” for Dark Web contraband sites. The systems emulate the functionality of Amazon and eBay-style vendor report cards.

Researchers in Germany have demonstrated one way to compromise WhatsApp secure group chat sessions. With chat and alternative communication channels becoming more useful to bad actors than Dark Web forums and Web sites, law enforcement and intelligence professionals seek ways to gather evidence.

DarkCyber points to a series of Dark Web reviews. The sites can be difficult to locate using Dark Web search systems and postings on pastesites. One of the identified Dark Web sites makes use of a hosting service in Ukraine.

About DarkCyber

DarkCyber is one of the few video news programs which presents information about the Dark Web and lesser known Internet services. The information in the program comes from research conducted for the second edition of “Dark Web Notebook” and from the information published in Beyond Search, a free Web log focused on search and online services. The blog is now in its 10th year of publication, and the backfile consists of more than 15,000 stories.

 

Kenny Toth, January 30, 2018

Google News Says Goodbye to Russian Propaganda

January 23, 2018

The United States is still reeling from possible Russian interference in the 2016 presidential election.  Every other day has some headline associated with the Trump Administration’s ties with the great bear, but the facts still remain unclear.  However, one cold, hard fact is that Russia did influence online news outlets, and media companies are taking steps to guarantee it does not happen again.  Motherboard reports that “Eric Schmidt Says Google News Will ‘Engineer’ Russian Propaganda Out Of News Feed.”

Alphabet Executive Chairman Eric Schmidt has faced criticism that Google News still displays Russian Web sites in news feeds.  In response, Schmidt said that his company is well aware of the problem and has a plan to ferret out Russian propaganda.  The top two Russian news outlets featured in Google News are Sputnik and RT.  Both Sputnik and RT are owned by the Russian government and have ceaselessly argued their legitimacy.  Their “legitimacy” allows them to benefit from Google AdSense.

Despite the false legitimacy, Schmidt said Alphabet is aware of Russia’s plans to influence western politics:

Schmidt said the Russian strategy is fairly transparent, and usually involves “amplification around a message.” That information can be “repetitive, exploitative, false, [or] likely to have been weaponized,” he said.  “My own view is that these patterns can be detected, and that they can be taken down or deprioritized.”

The problem is that Alphabet has not really outlined its plans to deter Russian influence.  Russian propaganda in the news bears some similarities to the Watergate scandal during the Nixon Administration.  We have yet to see the long-term aftermath, but it piques our curiosity about how it will affect the United States in years to come.

Whitney Grace, January 23, 2018

Big Data and Predictive Math: Some Doubters

January 19, 2018

I love Big Data. I love fancy math. I spotted two articles this morning which offer a contrarian view about two popular buzzwords: Big Data and Predictive Analytics.

The first write up is from the capitalist’s tool, Forbes Magazine. I cannot tell what’s an ad or what’s a “real” journalistic endeavor. But in today’s world? Maybe the distinction is like arguing with St. Thomas Aquinas about the cause of evil.

Forbes’ story is “Big Data Is Overrated Compared To Human Ingenuity.” The main point is that humans with intelligence are more ingenious than software. No software, as far as I can tell, was consulted when formulating the thesis. The main point for me was:

an algorithm may be able to cover sports, you cannot clone or generate whimsy or humor or the essence of what makes writing enjoyable to read. We are not (at least not yet) at a point where computers are able to have full conversations, let alone exude the creativity to come up with ideas. The creative geniuses of the future may, in fact, be aided by big data, but they will simply use it (as one would use Google to search the giant database known as the internet) to ask the right questions to solve the world’s problems.

My thought is, “What about robot wars?” Does that TV show presage the NFL of the future?

The second write up is from a British online publication. The article’s title is “Software That Predicts Whether Crims Will Break the Law Again Is No Better Than You or Me.”

The main idea strikes me as:

…if you took someone with no legal, psychological or criminal justice system training – perhaps you, dear reader – and showed them a few bits of information about a given defendant, they’d be able to guess as well as this software as to whether the criminal would break the law again.

Interesting point; however, software might be able to chop through a backlog of cases, thus reducing costs. Sure, a few good apples will be tossed into the for-profit prisons, but that’s just a statistical error.
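The claim that untrained humans guess as well as the software lines up with the related research finding that a very simple linear score on a couple of features performs about as well as commercial risk tools. Here is a minimal sketch of what such a score looks like; the weights, threshold, feature names, and records below are all invented for illustration and are not taken from any deployed system or real dataset.

```python
def risk_score(age, priors):
    """Hypothetical two-feature linear score: more prior offenses raise
    predicted risk, more years of age lower it. Weights are made up."""
    return 0.1 * priors - 0.02 * (age - 18)

def predict(age, priors, threshold=0.2):
    """Predict reoffense when the toy score clears an arbitrary threshold."""
    return risk_score(age, priors) > threshold

# Synthetic records: (age, prior_count, reoffended?) -- invented data.
records = [
    (19, 5, True), (23, 4, True), (45, 0, False), (52, 1, False),
    (30, 6, True), (60, 0, False), (21, 1, False), (35, 3, True),
]

correct = sum(predict(a, p) == y for a, p, y in records)
print(f"toy accuracy: {correct}/{len(records)}")  # prints "toy accuracy: 7/8"
```

The design point is the write up’s point: when two numbers and a threshold get you into the same accuracy neighborhood as a proprietary system, the “prediction” part of predictive analytics is doing less work than the marketing suggests.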

What I find amusing is the point made by a TV pundit in “How to Stop ‘Extremely Disruptive’ AI from Harming Society: Robert Shiller.” I don’t know about you but knowing unintended consequences before they occur might be difficult. Facebook has been around for years, and people are just now figuring out that the system can do more than help grandmother keep track of the grandchildren.

Exciting stuff. Predictive law enforcement is important. Big Data are getting bigger and being used to sell ads to people who don’t recognize the message as an ad. Regulating technology is like standing on the pier after the Queen Mary set sail and shouting, “Hey, come back.”

Stephen E Arnold, January 19, 2018

Neural Net Machine Translation May Increase Acceptance by Human Translators

January 2, 2018

Apparently, not all professional translators are fond of machine translation technology, with many feeling that it just gets in their way. A post from Trusted Translations’ blog examines, “Rage Against the Machine Translation: What’s All the Fuzz About?” Writer Cesarm thinks the big developers of MT tech, like Google and Amazon, have a blind spot—the emotional impact on all the humans involved in the process. From clients to linguists to end users, each has a stake in the results. Especially the linguists, who, after all, could theoretically lose their jobs altogether to the technology. We’re told, however, that (unspecified) studies indicate translators are more comfortable with software that incorporates neural networking/deep learning technology. It seems such tools produce a better linguistic flow, even if some accuracy is sacrificed. Cesarm writes:

That’s why I mention emotional investment in machine translation as a key element to reinventing the concept for users.  Understanding the latest changes that have been implemented in the process can help MT-using linguists get over their fears. It seems the classic, more standardized way of MT, (based solely on statistical comparison rather than artificial intelligence) is much better perceived by heavy users, considering the latter to be more efficient and easier to ‘fix’ whenever a Post-Editing task is being conducted, while Post Editing pre-translated text, with more classical technology has proven to be much more problematic, erratic, and what has probably nurtured the anger against MT in the first place, giving it a bad name. Most users (if not all of them) will take on pre-translated material processed with statistical MT rather that rule based MT any day. It seems Neural MT could be the best tool to bridge the way to an increased degree of acceptance by heavy users.

Perhaps. I suppose we will see whether linguists’ prejudice against MT technology ultimately hinders the process.

Cynthia Murrell, January 2, 2018

Humans Living Longer but Life Quality Suffers

December 28, 2017

Here is an article that offers some thoughts worth pondering.  The Daily Herald published “Study: Americans Are Retiring Later, Dying Sooner and Sicker In Between.”  It takes a look at how Americans are retiring at later ages than their parents because the retirement age keeps getting pushed up.  Putting off retirement ideally allows people to store away more money for their eventual retirement.  The problem, however, is that retirees are not able to enjoy themselves in their golden years; instead, they are forced to continue working in some capacity or deal with health problems.

Despite being one of the world’s richest countries and having some of the best healthcare, Americans’ health has deteriorated in the past decade.  Here are some numbers to make you cringe:

University of Michigan economists HwaJung Choi and Robert Schoeni used survey data to compare middle-age Americans’ health. A key measure is whether people have trouble with an “activity of daily living,” or ADL, such as walking across a room, dressing and bathing themselves, eating, or getting in or out of bed. The study showed the number of middle-age Americans with ADL limitations has jumped: 12.5 percent of Americans at the current retirement age of 66 had an ADL limitation in their late 50s, up from 8.8 percent for people with a retirement age of 65.

Also, Americans’ brains are rotting, with an 11 percent increase in dementia and other cognitive declines in people 58 to 60 years old.  Researchers are not quite sure what is causing the decline in health, but they, of course, have a lot of speculation.  Candidates include alcohol abuse, suicide, drug overdoses, and, the current favorite, increased obesity.

The real answer is multiple factors, such as genes, lifestyle, stress, environment, and diet.  All of these things come into play.  Despite poor health quality, we can count on more medical technological advances in the future.  The aging population may be the test ground that improves the golden years of its grandchildren.

Whitney Grace, December 28, 2017
