A Plea for Bing: Use It

September 14, 2019

Microsoft wants more people to use Bing, and Microsoft wants them to use it now! The company is so eager for more Bing users that it has built its trademarked search engine into the new Windows 10 update. Read the story at Win Buzzer, “Microsoft Builds Bing Search into Windows 10 20H1 Lock Screen.”

The Bing implementation is touted as a new search feature embedded in the Windows lock screen. The feature shipped with the new Windows 10 20H1 Preview Build 18932, but it remains hidden and can only be accessed with a tool such as Mach2. Integrating Bing into the lock screen is good design. The idea is to give users the option to run an Internet search without having to unlock their entire PC. It is for those, “Oh yeah, I need to look that up” moments. It is not stated where results will appear. If they show up on the lock screen itself, it is a genius move; if the results are only available after unlocking the PC, it is stupid.

Since Microsoft placed Bing on the Start menu, as much as 50% of Bing’s traffic reportedly arrives through that direct link rather than through the official Bing Web site. This is funny:

“At the moment, we just can’t see how the Bing feature on the lock screen would be useful. Of course, Microsoft may have some wider lock screen plans that we don’t know about yet. Whether this is Microsoft making a play to compete further with Google is unclear, but it probably won’t work. Bing is the default search tool on Windows PCs, but users continue to actively choose Google Search over it. Adding Bing to the lock screen will likely not change that. However, it will be interesting to see how Microsoft handles this new feature in the coming months.”

Apparently the author, Luke Jones, never has to figure out the name of that actor in that one movie, or the name of that place next to the good bakery where he ate lunch three weeks ago. Ah, Luke Jones may want to consult a librarian.

Whitney Grace, MLS, September 14, 2019

Mobile Phone Privacy?

September 13, 2019

Mobile devices are supposed to put the best, most reliable technology at our fingertips. Along with this great technology, we believe our privacy and information are protected: we shell out huge amounts for the hardware, pay a monthly bill, and expect the security to match the investment. Hackaday explains that this is not the case, even with the newest 5G technology, in the article, “5G Cellphones Location Privacy Broken Before It’s Even Implemented.”

Our location information is one of the top things that is supposed to be secure on mobile devices, but the Authentication and Key Agreement (AKA) protocol has been broken at the most basic level across 3G, 4G, and now 5G. What? Once upon a time, when 3G was the latest craze, spoofing cell phone towers was considered so expensive and difficult that a device’s International Mobile Subscriber Identity (IMSI) was simply transmitted unencrypted. The new 5G standard does have a more secure version, with asymmetric encryption and a challenge-response protocol that uses sequence numbers (SQNs) to prevent replay attacks. However, there is a way around this:

“This hack against the AKA protocol sidesteps the IMSI, which remains encrypted and secure under 5G, and tracks you using the SQN. The vulnerability exploits the AKA’s use of XOR to learn something about the SQN by repeating a challenge. Since the SQNs increment by one each time you use the phone, the authors can assume that if they see an SQN higher than a previous one by a reasonable number when you re-attach to their rogue cell tower, that it’s the same phone again. Since the SQNs are 48-bit numbers, their guess is very likely to be correct. What’s more, the difference in the SQN will reveal something about your phone usage while you’re away from the evil cell.”
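The linking heuristic described in the quote can be sketched in a few lines. This toy implements only the SQN-comparison logic; the threshold and numbers are invented, and no actual AKA cryptography is involved:

```python
# Toy illustration of the SQN-linking heuristic described above.
# The threshold and sample values are hypothetical; this does not implement
# the AKA protocol, only the "is this the same phone?" guess.

MAX_PLAUSIBLE_GAP = 512  # assumed bound on how much a phone's 48-bit SQN
                         # could plausibly grow between two sightings

def likely_same_phone(previous_sqn: int, observed_sqn: int) -> bool:
    """Guess whether two SQN observations came from the same handset.

    SQNs increment with each authentication, so a later sighting of the
    same phone should show a slightly higher SQN. Because SQNs are 48-bit
    values, a small positive gap is very unlikely to be coincidence.
    """
    gap = observed_sqn - previous_sqn
    return 0 < gap <= MAX_PLAUSIBLE_GAP

# A rogue tower saw SQN 100000 last week and sees 100037 today.
print(likely_same_phone(100_000, 100_037))  # True: plausibly the same phone
print(likely_same_phone(100_000, 99_500))   # False: SQN went backwards
```

As the quote notes, the gap itself also leaks how heavily the phone was used between sightings.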

Perhaps burner phones are a possible solution to some alleged 5G privacy issues?

Whitney Grace, September 13, 2019

The Secret AI Sauce: Blending Recipes

September 13, 2019

What is next for AI? According to PC Magazine, a union of sorts. Their headline declares, “The AI Breakthrough Will Require Researchers Burying Their Hatchets.” Though the piece may overstate the “rivalry” between rule-based AI (symbolism) and neural networks (connectionism), it presents an interesting perspective. Writer Ben Dickson begins with a little background—Symbolic AI was the way to go until 2012, when a breakthrough at the University of Toronto made neural-network AIs much more practical. Since then, he asserts, the field has been all abuzz about that approach, leaving symbolism in the dust. Now, though, Dickson writes:

“Seven years into the deep-learning revolution, we’ve seen that deep learning is not a perfect solution and has distinct weaknesses that limit its applications. One group of researchers at MIT and IBM believe the next breakthrough in AI might come from putting an end to the rivalry between symbolic AI and neural networks. In a paper presented at the International Conference on Learning Representations (ICLR) earlier this month, these researchers presented a concept called Neuro-Symbolic Concept Learner, which brings symbolic AI and neural networks together. This hybrid approach can create AI that is more flexible than the traditional models and can solve problems that neither symbolic AI nor neural networks can solve on their own.”

The article delves a bit into the limitations of deep learning and how a return to some symbolic AI tools can help, so navigate to the write-up for those details. Dickson presents this example on combining the two approaches:

“The MIT and IBM researchers used the Neuro-Symbolic Concept Learner (NSCL) to solve VQA [visual question-answering] problems. The NSCL uses neural networks to process the image in the VQA problem and then to transform it into a tabular representation of the objects it contains. Next, it uses another neural network to parse the question and transform it into a symbolic AI program that can run on the table of information produced in the previous step.”
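To make that pipeline concrete, here is a toy sketch of the symbolic half. The object table and the “program” below are hand-written stand-ins for what NSCL’s neural networks would actually produce from an image and a question:

```python
# Toy sketch of the hybrid approach described above. In the real NSCL the
# scene table and the program come from neural networks; both are
# hard-coded here so the symbolic execution step can be shown on its own.

# Stand-in for the perception network's output: a table of scene objects.
scene_table = [
    {"shape": "cube",     "color": "red",  "size": "large"},
    {"shape": "sphere",   "color": "blue", "size": "small"},
    {"shape": "cylinder", "color": "red",  "size": "small"},
]

# Stand-in for the question parser's output for
# "How many red objects are there?": a tiny symbolic program.
program = [("filter", "color", "red"), ("count",)]

def execute(program, table):
    """Run a symbolic program step by step over the object table."""
    result = table
    for step in program:
        if step[0] == "filter":
            _, attr, value = step
            result = [obj for obj in result if obj[attr] == value]
        elif step[0] == "count":
            result = len(result)
    return result

print(execute(program, scene_table))  # 2 red objects in the scene
```

Because the program is explicit, every intermediate step is inspectable, which hints at the interpretability benefit mentioned below.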

We see the logic here. Researchers tested NSCL on an image dataset called CLEVR and achieved 99.8 percent accuracy with much less data than required to train a stand-alone neural network to do the same things. IBM’s David Cox reports that incorporating symbolism also makes it much easier to see what the AI is doing under the hood. Though, as Dickson points out, voices on each side have spoken out against the other, the way forward may be to tap into the strengths of each approach. Seems logical.

Cynthia Murrell, September 13, 2019

Questionable Journals Fake Legitimacy

September 13, 2019

The problem of shoddy or fraudulent research being published as quality work continues to grow, and it is becoming harder to tell the good from the bad. Research Stash describes “How Fake Scientific Journals Are Bypassing Detection Filters.” In recent years, regulators and the media have insisted scientific journals follow certain standards. Instead of complying, however, some of these “predatory” journals have made changes that just make them look like they have mended their ways. The write-up cites a study out of the Gandhinagar Institute of Technology in India performed by Naman Jain, a student of Professor Mayank Singh. Writer Dinesh C Sharma reports:

“The researchers took a set of journals published by Omics, which has been accused of publishing predatory journals, with those published by BMC Publishing Group. Both publish hundreds of open access journals across several disciplines. Using data-driven analysis, researchers compared parameters like impact factors, journal name, indexing in digital directories, contact information, submission process, editorial boards, gender, and geographical data, editor-author commonality, etc. Analysis of this data and comparison between the two publishers showed that Omics is slowly evolving. Of the 35 criteria listed in the Beall’s list and which could be verified using the information available online, 22 criteria are common between Omics and BMC. Five criteria are satisfied by both the publishers, while 13 are satisfied by Omics but not by BMC. The predatory publishers are changing some of their processes. For example, Omics has started its online submission portal similar to well-known publishers. Earlier, it used to accept manuscripts through email. Omics dodges most of the Beall’s criteria to emerge as a reputed publisher.”
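The comparison method described in the quote amounts to set arithmetic over a checklist of criteria. A hypothetical sketch (the criterion names and which publisher satisfies which are invented for illustration, not the study’s actual data):

```python
# Illustrative sketch of comparing two publishers against a checklist of
# predatory-publishing criteria. The criterion names and assignments below
# are invented; the actual study verified 35 criteria from Beall's list.

omics_flags = {"email_submission", "fake_impact_factor", "hidden_fees",
               "no_review_dates", "broad_scope"}
bmc_flags = {"broad_scope"}

both = omics_flags & bmc_flags        # criteria satisfied by both publishers
omics_only = omics_flags - bmc_flags  # criteria satisfied only by Omics

print(f"{len(both)} criteria satisfied by both publishers")
print(f"{len(omics_only)} criteria satisfied only by Omics")
```

The study’s finding that the gap between the two publishers is shrinking corresponds to the “only by Omics” set getting smaller over time.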

Jain suggests we update the criteria for identifying quality research and use more data analytics to identify false and misleading articles. He offers his findings as a starting point, and we are told he plans to present his research at a conference in November.

Cynthia Murrell, September 13, 2019

Google Maps: Complex and Tricky for Some Users

September 12, 2019

Google Maps has become the one-stop map tool thanks to its reliability, ease of use, accuracy, and wealth of information. The map app, however, is not as accurate as you think, says Media Street in the article, “You Can’t Trust Google Maps To Find It All-Fake Businesses Are Everywhere.” The Wall Street Journal discovered that nearly eleven million businesses listed on Google Maps are fake. Some companies create phony listings to push their own business info ahead of the competition; others are outright scams.

In 2018, Google removed more than three million fake listings, more than 90% of them before a user ever saw them. Users reported 250,000 fake profiles, while Google’s own system flagged 85% of the removals. Google encourages users to report anything that appears suspicious or fraudulent.

Google does its best to track down the fake businesses:

“Google typically verifies if a business is legit by calling, mailing a postcard, or emailing a numerical code that is then entered on the website. It’s a pretty easy process for savvy scammers who likely use fake addresses and businesses for their listings anyway. Knowing this, the company says that they are constantly developing new ways to weed out fake listings, but can’t elaborate on what they are due to the sensitive nature.

‘Every month Maps is used by more than a billion people around the world, and every day we and our users work as a community to improve the map for each other,’ Google Maps’ product director, Ethan Russell, wrote in the blog post. ‘We know that a small minority will continue trying to scam others, so there will always be work to do and we’re committed to keep doing better.’”

There are ways to be wise to these scams. Avoid businesses with names that include words like “dependable” or “emergency,” screen your phone calls, do not trust all the reviews, and do your own research: see if the business has a Web site, check other review sites, view its social media accounts, and so on. Never forget to trust your gut, either.

Whitney Grace, September 12, 2019

Machine Learning Created A Big Data Problem And Only Machine Learning Can Fix It

September 12, 2019

Companies invest heavily in machine learning algorithms, but they soon learn that the algorithms are not magic and do not deliver the desired business insights. Data scientists are then employed to handle the junk data and “fix the problem,” but they hardly get to use their skills appropriately. The bigger problem, says Silicon Angle’s article “The Real Big-Data Problem And Only Machine Learning Can Fix It,” is that businesses do not employ machine learning algorithms from the outset. Instead, they concentrate on the end result and on data quantity over quality, most of which is useless.

Tamr Inc. CEO Andy Palmer and chief technology officer Michael Stonebraker believe that smaller startups offer companies more scalable big-data solutions than the legacy vendors do. Tamr helps companies use machine learning to unify their data silos. Palmer and Stonebraker have worked for years to share the truth about big data: it is better to use machine learning for the menial labor, so the data can be cleaned and organized before it is analyzed, marketed, or sold.

Going all-in on machine learning is another problem, and it has more to do with a company’s culture than anything else:

“Machine learning isn’t a silver bullet, Stonebraker conceded. Becoming truly data-driven requires both technological and cultural adjustments. In fact, 77% of surveyed executives said business adoption of big data/AI initiatives is difficult for their organizations, according to a NewVantage Partners LLC study. That’s up from last year despite plenty of new software flooding the market. These executives cited a number of obstacles holding back adoption, 95% of which were cultural or organizational, rather than technological. ‘Organizations … need a plan to get to production. Most don’t plan and treat big data as technology retail therapy,’ Gartner Inc. analyst Nick Heudecker has said.”

That culture is one reason data scientists are forced to spend so much of their time sifting and sorting data. Shifting the menial work to machine learning also means replacing humans with software. Will organizations have the knowledge to make this type of shift in an informed manner?

Whitney Grace, September 12, 2019

Handy Visual Reference of Data Model Evaluation Techniques

September 12, 2019

There are many ways to evaluate one’s data models, and Data Science Central presents an extensive yet succinct reference in visual form—“Model Evaluation Techniques in One Picture.” Together, the image and links make for a useful resource. Creator Stephanie Glen writes:

“The sheer number of model evaluation techniques available to assess how good your model is can be completely overwhelming. As well as the oft-used confidence intervals, confusion matrix and cross validation, there are dozens more that you could use for specific situations, including McNemar’s test, Cochran’s Q, Multiple Hypothesis testing and many more. This one picture whittles down that list to a dozen or so of the most popular. You’ll find links to articles explaining the specific tests and procedures below the image.”
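As a tiny illustration of two staples from that list, here is a dependency-free confusion matrix and the accuracy derived from it. The labels are made up for the example:

```python
# Build a 2x2 confusion matrix by hand and compute accuracy from it.
# The true and predicted labels below are invented sample data.

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

pairs = list(zip(y_true, y_pred))
tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives
fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives

print(f"confusion matrix: TP={tp} FP={fp} FN={fn} TN={tn}")
print(f"accuracy: {(tp + tn) / len(y_true):.3f}")
```

Accuracy is only the start; the same four cells also yield precision, recall, and the other classification statistics the graphic covers.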

Glen may be underselling her list of links after the graphic; it would be worth navigating to her post for that alone. The visual, though, elegantly simplifies a complex topic. It is divided into these subtopics: general tests and tools; regression; classification: visual aids; and classification: statistics and tools. Interested readers should check it out; you might just decide to bookmark it for future reference, too.

Cynthia Murrell, September 12, 2019

Elastic Stack Goes Into Cyber Security

September 11, 2019

The open source search company Elastic has augmented its offerings with new security technology. ZDNet delves into the new endeavor in the article, “Elastic Takes the First Steps Toward Building Out Its SIEM Solution.” Elastic Stack, the company’s open source search and analytics suite, received a new update: a data model and user interface for Security Information and Event Management (SIEM).

Elastic has a lot of competition, so the company decided that making its log, search, and analytics stack more utilitarian would expand its client base. The SIEM update is an appealing security solution:

“The SIEM features lay the foundations for a more fleshed-out solution going forward with the new Elastic Common Schema, an open source specification for field naming conventions and data types; think of the new common schema as a Rosetta Stone for the different types of logs, metrics, and other contextual data that is used for analyzing security events. Additionally, the 7.2 release adds a dedicated user interface for security events, featuring a timeline viewer to store evidence of an attack, pin and annotate relevant events, and provide query filtering capabilities.”
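The “Rosetta Stone” idea can be illustrated with a hypothetical sketch: two differently shaped raw logs normalized to shared ECS-style field names. The raw log shapes and mapping functions below are invented; `source.ip`, `event.category`, and `@timestamp` are actual ECS field names:

```python
# Sketch of the Elastic Common Schema idea: heterogeneous logs mapped onto
# shared field names so one query can span both sources. The raw log
# formats and mapping functions here are invented for illustration.

def normalize_firewall(raw: dict) -> dict:
    """Map a hypothetical firewall log line onto ECS-style fields."""
    return {
        "@timestamp": raw["ts"],
        "source.ip": raw["src"],
        "event.category": "network",
    }

def normalize_webserver(raw: dict) -> dict:
    """Map a hypothetical web server access log onto the same fields."""
    return {
        "@timestamp": raw["time"],
        "source.ip": raw["client_addr"],
        "event.category": "web",
    }

fw = normalize_firewall({"ts": "2019-09-11T10:00:00Z", "src": "10.0.0.5"})
web = normalize_webserver({"time": "2019-09-11T10:01:00Z",
                           "client_addr": "10.0.0.5"})

# After normalization, a single filter on "source.ip" covers both log types.
print([e["event.category"] for e in (fw, web)
       if e["source.ip"] == "10.0.0.5"])
```

That shared vocabulary is what lets an analyst pivot from a firewall alert to the matching web server activity without knowing each source’s native format.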

While appealing, the Elastic SIEM offerings are still skeletal. Elastic has, however, acquired Endgame, a company that designs endpoint security solutions, and will probably fold its technology into a future SIEM update.

Search is also more powerful. Once limited to the Elastic cloud, it can now be used on on-premises systems as well. Elastic is also extending its services to build a scalable, search-based solution that provides insights for detecting potential threats.

Will other enterprise search vendors follow Elastic?

Whitney Grace, September 11, 2019

YouTube and Copyright: Changes Made

September 11, 2019


YouTubers love and hate their platform of choice. They love that they have the freedom to make videos, but they hate YouTube’s unfair copyright infringement system. If you are unfamiliar with YouTube’s copyright infringement system, then read Gizmodo’s article, “YouTube Announces Some Changes To Its Infamously Awful Copyright Infringement System.”

The opening paragraph says it all:

“The number of issues plaguing YouTube at any one time boggles the mind, and range from accusations it promotes extremist content to reports its nightmare algorithm recommended home videos of children to the pedophiles infesting its comments sections. One of the less overtly alarming but still widespread issues has been the shoddy state of its copyright infringement claims system, which report after report have repeatedly indicated is trivially abused to file false claims, extort creators, and generally make YouTubers’ lives hell.”

YouTube CEO Susan Wojcicki announced in July 2019 that there would be numerous changes to the copyright claim system. The copyright claim system is different from the automated copyright infringement system because the former is manual. Anyone who files a claim through the copyright claim system will now need to input exact timestamps of the alleged violation instead of flagging an entire video.

Previously, YouTubers were not told how one of their videos violated a copyright claim. The new timestamp system will highlight the section of the video that is under scrutiny. YouTube will also promote more of its tools for making a video copyright compliant, such as muting the sound or deleting a segment. These tools were available before, but YouTubers did not know where in their videos the problem was.

Problems still exist for content creators using copyrighted material for reviews, education, research, or news. Many YouTubers who make these types of videos claim their content falls under fair use guidelines.

Maybe the suffering of some YouTubers will lessen. Maybe.

Whitney Grace, September 11, 2019

A Look at Snovio for Drip Campaigns

September 11, 2019

Marketing strategies evolve with the technologies available, and right now drip campaigns are popular. This approach involves sending pre-written messages to leads or existing customers over time. Naturally, automation makes drizzling such series into many inboxes a snap. Several apps exist to do just that; The Tech Block describes one of them in its write-up, “Snov.io Email Drip Campaigns: The Perfect Tool to Reach Out to More People.” We learn:

“Snov.io is a platform that gives marketers an opportunity to search for leads and their email addresses, check if they are real, send auto follow-ups, and some other useful features. You can use the email finder as an extension for Google Chrome and as a web application. It will help you find emails from various websites and even LinkedIn. With the help of the email checker, you will verify all the email addresses that you have found. This way you will be able to create an entire email contact list of potential prospects and send drip emails to them via the Email Drip Campaigns tool. The Gmail Tracker traces the sent emails and provides the details on email opens and link clicks.”
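The drip mechanic itself is easy to sketch. This toy scheduler is not Snov.io’s implementation; the messages and day offsets are invented. It simply sends pre-written messages at fixed offsets from a signup date:

```python
from datetime import date, timedelta

# Toy drip-campaign schedule: pre-written messages sent at fixed offsets
# after a lead signs up. The message text and offsets are invented; this
# is an illustration of the concept, not Snov.io's implementation.

CAMPAIGN = [
    (0, "Welcome! Thanks for signing up."),
    (3, "Here is a quick tour of the product."),
    (7, "Any questions? Reply to this email."),
]

def messages_due(signup: date, today: date) -> list[str]:
    """Return the campaign messages that should go out on 'today'."""
    return [text for offset, text in CAMPAIGN
            if signup + timedelta(days=offset) == today]

# A lead who signed up September 1 gets the day-3 message on September 4.
print(messages_due(date(2019, 9, 1), date(2019, 9, 4)))
```

A real tool layers lead discovery, address verification, open/click tracking, and reply-based stop conditions on top of this core loop.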

Snovio boasts professional message templates that help companies put their best foot forward, and promises “complete” automation. See the write-up, or Snovio’s website, for more details; don’t forget to check out some of the competition as well. Founded in 2017, Snovio is based in New York City.

Cynthia Murrell, September 11, 2019
