Web Search with Privacy: SearX

August 24, 2018

For far too long we have been living in the Wild West of search: there are too few rules, and personal data has been far too fluid. While we wait for the Googles of the world to change their policies (fat chance!), the time has come to find alternatives for those of us who make privacy a top priority. We learned more about this revolution from a Make Use Of story, “Avoid Google and Bing: 7 Alternative Search Engines That Value Privacy.”

According to the story:

“Functionally, SearX is a metasearch engine, meaning it aggregates data from a number of other search engines then provides you with the best mix available. Results from several of the other search engines on this list—including DuckDuckGo, Qwant, and StartPage—are available. You can customize the engines that SearX uses to find results in the Preferences menu.”
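The aggregation step the quote describes amounts to rank fusion: collect ranked result lists from several engines, then merge them into one ordering. A minimal Python sketch, assuming hypothetical engines and URLs and using reciprocal-rank fusion as the merging technique (SearX’s actual scoring logic differs):

```python
# Toy metasearch fusion. Engine names and URLs are illustrative only;
# this is not SearX's real scoring code.
from collections import defaultdict

def fuse_results(engine_results):
    """Merge ranked result lists with reciprocal-rank fusion."""
    scores = defaultdict(float)
    for results in engine_results.values():
        for rank, url in enumerate(results, start=1):
            scores[url] += 1.0 / rank  # earlier ranks contribute more
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)

engine_results = {
    "duckduckgo": ["a.com", "b.com", "c.com"],
    "qwant":      ["b.com", "a.com", "d.com"],
    "startpage":  ["a.com", "d.com", "b.com"],
}
print(fuse_results(engine_results))  # a.com leads: two engines rank it first
```

A result that appears near the top of several engines’ lists outranks one that a single engine ranks highly, which is the “best mix available” idea in miniature.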

Is a new search engine the answer? Probably not. In another time, we might have argued that the world has room for more search engines, but with the rise of voice search and the money required to research this type of thing, the odds of a new search engine displacing Google or the like are vanishingly small. There are other privacy-centric Web search systems; for example, Unbubble.

The question becomes, “Are these systems private, or are the data available to authorities with the proper documentation?” Marketing is different from privacy for some people.

Patrick Roland, August 24, 2018

Alexa Is Still Taking Language Lessons

August 24, 2018

Though Amazon has been aware of the problem for a while, Alexa still responds better to people who sound like those she grew up with than she does to others. It is a problem many of us can relate to, but one the company really needs to solve as it continues to deploy its voice-activated digital assistant worldwide. TheNextWeb cites a recent Washington Post study as it reports, “Alexa Needs Better Training to Understand Non-American Accents.” It is worth noting it is not just foreign accents the software cannot recognize; the device has trouble with many regional dialects within the US, as well.

“The team had more than 100 people from nearly 20 US cities dictate thousands of voice commands to Alexa. From the exercise, it found that Amazon’s Alexa-based voice-activated speaker was 30 percent less likely to comprehend commands issued by people with non-American accents. The Washington Post also reported that people with Spanish as their first language were understood 6 percent less often than people who grew up around California or Washington and spoke English as a first language. Amazon officials also admitted to The Washington Post that grasping non-American accents poses a major challenge both in keeping current Amazon Echo users satisfied, and expanding sales of their devices worldwide. Rachael Tatman, a Kaggle data scientist with expertise in speech recognition, told The Washington Post that this was evidence of bias in the training provided to voice recognition systems. ‘These systems are going to work best for white, highly educated, upper-middle-class Americans, probably from the West Coast, because that’s the group that’s had access to the technology from the very beginning,’ she said.”

Yes, the bias we find here is the natural result of working with what you have where you are, and perhaps Amazon can be forgiven for not foreseeing the problem from the beginning. Perhaps. The article grants that the company has been working toward a resolution, and references their efforts to prepare for the Indian market as an example. It seems to be slow going.

Cynthia Murrell, August 24, 2018

Social Media: It Is Wiggling to Stay Off the Barbed Hook

August 23, 2018

A battle has been raging over the plague of fake news running amok on our screens. Many experts think it is the responsibility of the platform companies to regulate their content to help curb false information. However, a recent study covered by ZDNet in “Can Regulating Twitter and Facebook Stop the Spread of Fake News?” suggests otherwise.

According to the report:

The committee rejects the idea that Facebook, Twitter, Google etc are merely “platforms” who are not responsible for their content.

We noted:

“The report said that social media companies cannot hide behind the claim of being merely a ‘platform’, and by claiming that they are tech companies and have no role themselves in regulating the content of their sites.”

Undoubtedly, the claim that they are merely tech companies with no content responsibilities will be used to avoid taking action, despite outcries from citizens and governments for them to do something. It will obviously be cheaper to keep these platforms at the status quo (if that is what they choose to do), but they may lose customers in the long run if trust erodes. We suspect this will be a splintering moment for the social media giants, who will choose to handle this issue differently.

Just letting the “platform” run has produced interesting consequences: filtering, human editors, definitions of what is acceptable, and so on.

Patrick Roland, August 23, 2018

AI Poised to Take Over Writing in Surprising Ways

August 23, 2018

Long ago, it was writers who told us technology would steal our jobs. In a fit of irony no novelist could resist, the time has come, and it might just be snatching up writers’ jobs. We discovered this in a recent BGR story, “Scientists Trained an AI to Write Poetry, Now It’s Going Toe-To-Toe With Shakespeare.”

According to the story:

“The AI was trained extensively on the rules it needed to follow to craft an acceptable poem. It was fed nearly 3,000 sonnets as training, and the algorithm tore them apart to teach itself how the words worked with each other. Once the bot was brought up to speed it was tasked with crafting some poems of its own.”
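The training step the quote describes, teaching itself “how the words worked with each other,” can be caricatured with a bigram model: count which word follows which in the corpus, then chain those transitions to emit new lines. A toy sketch only; the actual sonnet-writing system is far more sophisticated, and the corpus line below is illustrative:

```python
# Toy bigram text generator, a caricature of "learning how words work
# with each other" from a training corpus. Not the article's actual model.
import random
from collections import defaultdict

def train_bigrams(corpus_lines):
    """Map each word to the list of words observed following it."""
    follows = defaultdict(list)
    for line in corpus_lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def generate(follows, start, length=6, seed=1):
    """Walk the bigram table to emit a line of up to `length` words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(nxt))
    return " ".join(out)

follows = train_bigrams(["shall i compare thee to a summers day"])
print(generate(follows, "shall", length=4))
```

Feed such a model 3,000 sonnets instead of one line and the chains start to sound plausibly poetic, which is the intuition behind the quote.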

The write up gives an example of the computer’s work, and it is surprisingly solid. However, many experts say this is not the end of creativity. As pointed out by Scientific American, just because a computer creates something that looks like art does not mean it is actually art. That is because people overlook the need for human expression as an outlet, something AI does not have.

Let a software system create facts. Sounds like a plan.

Patrick Roland, August 23, 2018

New Technology: A Problem Solver or Problem Generator?

August 23, 2018

Noodling about the k-epsilon model is probably not popular at most cocktail parties. In brief, when flows occur, chaos usually turns up at the party.

Consider a large company anchored in technology from IBM, SAP, and some home brew applications. Toss in a mainframe, an AS/400 legacy system, and run-of-the-mill desktops.

Technological change is difficult. When a switch was needed from Windows 3.11 to Windows 95, the shift could take years. The mainframe keeps chugging along with CICS, MVS TSO, and green screens. The SAP system gets updated, but after a three year install process, who wants to make changes?

Today, the world of enterprise computing is different. Even the US government wants to move to the cloud. Virtualization is the big thing, not hardware down the hall behind a keycarded door.

I read “Fragmenting Budgets and Rapid Pace of Change Creates Perfect Storm for IT Decision Makers.” The write up explains a situation which I thought most computer-centric folks knew and understood.

The write up explains:

IT decision-makers are increasingly tasked with the difficult decision of choosing technology within business operations and finding the correct IT solutions for business needs.  This extra link in the chain combined with the ever-accelerating pace of technological development is creating a perfect storm. In fact, a recent survey of IT decision-makers found that more than half are struggling to keep up with the pace of new technology. Most (84 per cent), acknowledge that they are not currently running the most optimum IT systems and significantly, 28 per cent admit that their organization has actually fallen behind the rate of technological change.

Nothing is as compelling as fear in an organization.

What’s happening is that the friction brakes of old school systems and methods are being replaced with the equivalent of dragging a sneaker on the pavement to slow down a bicycle. For some young at heart managers, the sneaker brake is great fun.

The downside is what I call a chaos problem. Semi stable flows become chaotic or, in more colloquial language, pretty darned crazy.


IT managers now find themselves in a technology environment less stable than the one that existed a scant 10 years ago. The decision to embrace fast changing innovations can be significant. Not only will the competitiveness of the organization be affected, but the work environment may no longer match what must be accomplished to remain a viable entity.

Examples include the well publicized engineer revolts at Facebook and Amazon, the technical waffling from chip vendors when flaws are discovered, and the presence of point of sale units at fast food chains and grocery stores which employees cannot operate.

The write up documents an accelerating opportunity for consultants. For those crushed with the fragments from technical chaos, the future may require rehab.

Stephen E Arnold, August 23, 2018

Twitter Bans Accounts

August 22, 2018

I read “Facebook and Twitter Ban over 900 Accounts in Bid to Tackle Fake News.” Twitter was founded about 12 years ago. The company found itself in the midst of the 2016 election messaging flap. The article reports:

Facebook said it had identified and banned 652 accounts, groups and pages which were linked to Iran and to Russia for “co-ordinated inauthentic behavior”, including the sharing of political material.

One of the interesting items of information which surfaced when my team was doing the research for CyberOSINT and the Dark Web Notebook, both monographs designed for law enforcement and intelligence professionals, was the ease with which Twitter accounts can be obtained.

For a program we developed for a conference organizer in Washington, DC, in 2015, we illustrated Twitter messages with links to information designed to attract young men and women to movements which advocated some activities which broke US laws.

The challenge had several dimensions in 2015. Let me run down the ones the other speakers and I mentioned; for example:

  • The ease with which an account could be created
  • The ease with which multiple accounts could be created
  • The ease with which messages could be generated with suitable index terms
  • The ease with which messages could be disseminated across multiple accounts via scripts
  • The lack of filtering to block weaponized content

Back to the present.

Banning an account addresses one of these challenges.

The notion of low friction content dissemination, unrestricted indexing, and the ability to create accounts is one to ponder.

Killing an account or a group of accounts may not have the desired effect.

Compared to other social networks, Twitter has a strong following in certain socio-economic sectors. That in itself adds a bit of spice to the sauce.

Stephen E Arnold, August 22, 2018

Smart Software and Old School Technology

August 22, 2018

It feels strange to say that anything analog is a trend in artificial intelligence, but that certainly seems to be the case in one segment. According to reports, there’s actually a way for AI to get faster and more accurate by indulging in some old timey thinking. We learned more from a recent Kurzweil article, “IBM Researchers Use Analog Memory to Train Deep Neural Networks Faster and More Efficiently.”

According to the story:

“IBM researchers used large arrays of non-volatile analog memory devices (which use continuously variable signals rather than binary 0s and 1s) to perform computations. Those arrays allowed the researchers to create, in hardware, the same scale and precision of AI calculations that are achieved by more energy-intensive systems in software, but running hundreds of times faster and at hundreds of times lower power…”
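The idea of computing directly in analog memory can be illustrated with a toy simulation. A hedged sketch, assuming made-up weights and a simple Gaussian noise model rather than IBM’s actual device physics: the multiply-accumulate happens “in” the stored weights, imperfections and all, which is why training methods must tolerate device noise.

```python
# Toy simulation of an analog in-memory matrix-vector product. The noise
# model and numbers are illustrative assumptions, not IBM's device data.
import random

def analog_matvec(weights, x, noise_std=0.01, seed=0):
    """Matrix-vector product where each stored weight is read with
    device noise, mimicking multiply-accumulate inside an analog array."""
    rng = random.Random(seed)
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            acc += (w + rng.gauss(0, noise_std)) * xi  # noisy conductance read
        out.append(acc)
    return out

exact = analog_matvec([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0], noise_std=0.0)
noisy = analog_matvec([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0], noise_std=0.05)
print(exact, noisy)  # the noisy result stays close to the exact [3.0, 7.0]
```

The point of the research is that, with careful technique, training can reach software-equivalent accuracy despite this kind of imprecision, while the array does the arithmetic in place at much lower energy cost.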

This is an intriguing development for AI and machine learning. Next Platform took a look at this news as well and found: “these efforts focused on integrating analog resistive-type electronic memories onto CMOS wafers, they also look at photonic-based devices and systems and how these might fit into the deep learning landscape.” We’re excited to see where this development goes and what companies will do with greater AI strength.

Patrick Roland, August 22, 2018

Fake Reviews, Not Just Fake News

August 22, 2018

When shopping online, one cannot closely examine a product for oneself, so it is tempting to rely on reviews attached to its description. NPR reports, “Some Amazon Reviews Are Too Good to Be Believed. They’re Paid For.” It is a problem that we’ve been aware of for some time, and reporter Ryan Kailath observes that networks have arisen around paid reviews, doing business through social media. There are even what one might call best practices. We learn:

“As Amazon and its algorithms get better at hunting them down, paid reviewers employ their own evasive maneuvers. Travis, the teenage paid reviewer, explained his process. He’s a member of several online channels where Amazon sellers congregate, hawking Ethernet cables, flashlights, protein powder, fanny packs — any number of small items for which they want favorable reviews. If something catches Travis’ attention, he approaches the seller and they negotiate terms. Once he buys the product and leaves a five-star review, the seller will refund his purchase, often adding a few dollars ‘commission’ for his trouble. He says he earns around $200 a month this way. The sellers provide detailed instructions, to avoid being detected by Amazon’s algorithms, Travis says. For example, he says, ‘Order here at the Amazon link. Don’t clip any coupons or promo codes. [Wait 4 to 5 days] after receiving [the item].’ This last instruction is especially important, Travis adds. ‘If you review too soon after receiving it’ll look pretty suspicious.’”
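The timing rule Travis describes translates directly into a detection heuristic: a top rating posted very soon after delivery is a weak fraud signal. A minimal sketch, assuming a four-day threshold taken from the quote; Amazon’s actual detection systems are proprietary and certainly combine many more signals:

```python
# Toy heuristic inspired by the quote's "wait 4 to 5 days" advice.
# The threshold is an assumption, not Amazon's real rule.
from datetime import date

def looks_suspicious(delivered: date, reviewed: date,
                     rating: int, min_gap_days: int = 4) -> bool:
    """Flag a five-star review posted too soon after delivery."""
    gap = (reviewed - delivered).days
    return rating == 5 and gap < min_gap_days

print(looks_suspicious(date(2018, 8, 1), date(2018, 8, 2), rating=5))
```

A single signal like this produces many false positives on its own, which is exactly why paid reviewers can evade detection by following a few simple instructions.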

Outside auditors estimate more than half the reviews for certain products are not to be trusted, though Amazon disputes that conclusion. Citing Renée DiResta, a Mozilla Fellow on media, misinformation, and trust, Kailath notes that investing in these reviews has been paying off for many companies. Many of these firms are Chinese, we’re told, operating through the Chinese site Alibaba. They seek to penetrate US markets by leveraging Amazon’s powerful reach. Ultimately, DiResta warns, the problem could hurt Amazon’s reputation, but the company can only do so much. Meanwhile, she suggests customers turn to third-party review sites, like CNET or Wirecutter, for example. Are these sites objective? Perhaps.

Cynthia Murrell, August 22, 2018

Factualities for August 22, 2018

August 22, 2018

Believe ‘em or not. This week’s factualities are:

  • 25,000. The number of illegal gambling apps removed from Apple’s App Store at the behest of the Chinese government. Source: Wall Street Journal with a pay wall at this link.
  • Museum puts sewage on display. Source: Ars Technica
  • 33, the number of clinical trial centric scientific papers published by a Japanese expert. How many were identified as containing made up data? Just 23. Source: Science Magazine
  • Get paid to watch dirty movies. Yep, but you get a special crypto currency. Source: Metro Newspaper at this link
  • 500. The number of English speaking robots to be deployed in Japanese schools. Source: ZDNet at this link
  • Who is the leader, according to Forrester and IBM, in industrial Internet of Things platforms? IBM. Source: IBM at this link
  • The secret to managing millennials? Don’t assume they are millennials. Source: Inc. at this link

Ah, the modern world with mobiles and online.

Stephen E Arnold, August 23, 2018

The Social Vendor ATM: Governments Want to Withdraw Cash

August 21, 2018

I read “Social Networks to Be Fined for Hosting Terrorist Content.” My first reaction is, “Who is going to define terrorist content?” Without an answer swirling into my mind, I looked to the article for insight.

I learned:

… the EC’s going to follow through on threats to fine companies like Twitter, Facebook and YouTube for not deleting flagged content post-haste. The commission is still drawing up the details…

I assume that one of the details will be a definition of terrorist content.

How long will a large, mostly high school science club type company have to remove the identified content?

The answer:

One hour for platforms to delete terrorist content.

My experience, though hardly representative, is that it is difficult to get much accomplished in one hour in my home office. A 60 minute turnaround time may be as challenging for a large outfit operating under the fluid principles of high school science club management.

Programmers sort of work in a combination of intense focus and general confusion. My hunch is that it may be difficult to saddle up the folks at a giant social vendor to comply with a takedown request in 3,600 seconds.

My thought is that the one hour response time may be one way to get the social media ATM to eject cash.

By the way, some of Google’s deletion success can be viewed at this page on YouTube. Note that there are some interesting videos which are not deleted. One useful way to identify some interesting videos is to search for the word “nashid” or “nasheed.”

The results list seems to reveal at least one facet of terrorism’s definition.

Stephen E Arnold, August 21, 2018
