Snapchat wants to use image recognition to send ads, and it could be a little creepy

July 19, 2016

One of Snapchat’s best loved features is its photo filters, which use GPS data and augmented reality to add interactive “lenses” to your photos and videos. Now, the messaging startup wants to make that offering more powerful—and lucrative.

A patent application published on July 14, titled “Object Recognition Based Photo Filters,” describes lenses and filters that would be based on the picture you’re taking. For example, if you’re snapping a photo of the Empire State Building, you’d be given the option of a King Kong filter in which the ape climbs the building. The application also outlines how Snapchat could push you a free coffee offer after you post a photo of a hot cup of java.
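In outline, what the application describes is a lookup from recognized objects to overlays. Here is a minimal, hypothetical sketch of that idea in Python; the classifier stub, labels, and catalogs are placeholders for illustration, not anything taken from the filing.

```python
# Toy sketch of object-recognition-driven filters and offers.
# classify_image() stands in for a real image-recognition model;
# FILTERS and OFFERS are made-up catalogs.
FILTERS = {"empire state building": "king_kong_lens"}
OFFERS = {"coffee": "free coffee at a nearby cafe"}

def classify_image(photo_bytes):
    # Placeholder: a real system would run a trained model here.
    return ["coffee", "table"]

def suggest_overlays(photo_bytes):
    labels = classify_image(photo_bytes)
    filters = [FILTERS[label] for label in labels if label in FILTERS]
    offers = [OFFERS[label] for label in labels if label in OFFERS]
    return filters, offers

print(suggest_overlays(b""))  # -> ([], ['free coffee at a nearby cafe'])
```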

Snapchat’s patent shows it’ll recognize the Empire State Building, and push a King Kong filter to a user. (USPTO/Snapchat)
Snapchat would recognize your coffee in your photo, and send you an offer. (USPTO/Snapchat)

Snapchat has 150 million users who watch 10 billion videos a day, and they’ve shown no resistance to using sponsored filters. One by Gatorade during this year’s Super Bowl generated 160 million impressions.

But the deep image recognition software needed for the capabilities described in the patent goes further than anything offered to date, and it could make users uncomfortable. Based on the application, Snapchat would look at what you’re sending and where you are, then serve you advertisements based on that. Snapchat declined to comment on the application.

The tension between user experience and building an advertising business is a challenge almost every social media company has faced. Facebook and Twitter have had their ups and downs, and so will Snapchat. The company is internally projecting sales of $250 million to $350 million in 2016, and between $500 million and $1 billion in 2017. Snapchat brought in just $59 million in 2015, according to TechCrunch.

Companies file patent applications that go unused all the time, and this one has not yet been granted. But the bet is on whether consumers (especially young ones, like Snapchat’s core demographic) are willing to sacrifice their privacy for fun and potentially useful products. And, for Snapchat, the answer is the difference between being a hip, trendy app and being the next Facebook.

Google Launches Shop the Look to Optimize Advertising for Retailers


Photo: Courtesy of Google

Advertising to consumers is now a more seamless experience thanks to Google.

Last week, the search engine debuted “Shop the Look,” a new apparel and home décor experience for its retail advertisers, allowing them to reach more consumers while helping brands increase visitor traffic and boost digital sales.

As more consumers browse and purchase items on their smartphones, it is crucial for retailers to create mobile-friendly brand strategies. According to a recent Google study, 90 percent of mobile users said they aren’t absolutely sure of the specific brands they want to purchase items from when they start shopping.

To help consumers discover products instantly, Google is building on ad experiences, including Showcase Shopping ads and Shopping ads on image search. Both options allow consumers to browse items, compare prices and purchase products without typical digital complications. The new version allows people to explore the world of fashion and shop products directly from curated, inspirational images on google.com.

First, consumers can type a particular wardrobe item, like a little black dress, into Google search. Once they hit enter, a picture of a popular fashion blogger wearing a little black dress, heels and a cross-body bag may pop up on their page. Consumers may then shop for exact or similar products found in the image with a few taps.

Shop the Look images are curated by fashion partners, including Polyvore, that feature content from brands, bloggers, publishers and retailers. As with standard shopping ad guidelines on Google, retail advertisers will be charged on a cost-per-click basis. Advertisers interested in Shop the Look can register through Google Shopping Campaigns.

Via: SourcingJournalOnline

Intel buys chip maker Movidius to help bring computer vision to drones



Intel’s RealSense computer vision platform has been lacking a low-powered way of recognizing what its depth-sensing cameras are seeing — until now. The chip giant is buying Movidius, the designer of a range of system-on-chip products for accelerating computer vision processing.

Movidius supplies chips to drone makers such as DJI and to thermal imaging company FLIR Systems, itself a supplier of DJI. Its chips help computers figure out what they are seeing through cameras like Intel’s RealSense by breaking down the processing into a set of smaller tasks that they can execute in parallel.

There are systems that already do this using GPUs, but those are relatively power-hungry, often consuming tens of watts. That’s not a problem in fixed applications with access to mains electricity, or in cars, which have huge batteries and a way to recharge them. But in drones or other lightweight IoT devices, power consumption needs to be much lower. Movidius aims for a design power of around one watt with its Myriad 2 vision processing units.

Having largely failed to get its Atom processors into smartphones, Intel is looking for ways to lever them into other devices, such as drones.

Josh Walden, senior vice president and general manager of Intel’s New Technology Group, sees potential for Movidius to help it create systems for drones, and also for augmented, virtual and merged reality devices, robots and security cameras, he said in a post to the company’s blog. It’s not just about the chips, he said: Intel is also buying algorithms developed by Movidius for deep learning, depth processing, navigation and mapping, and natural interactions.

Via PCWorld

CURATION AND ALGORITHMS



BY BEN THOMPSON

Jimmy Iovine spared no words when it came to his opinion of algorithms during the unveiling of Apple Music:

The only song that matters as much as the song you’re listening to right now is the one that follows this. Picture this: you’re in a special moment…and the next song comes on…BZZZZZ Buzzkill! It probably happened because it was programmed by an algorithm alone. Algorithms alone can’t do that emotional task. You need a human touch. And that’s why at Apple Music we’re going to give you the right song [and] the right playlist at the right moment all on demand.

About Beats 1, the new Apple Music radio station, Iovine added:

[It] plays music not based on research, not based on genre, not based on drum beats, only music that is great and feels great. A station that only has one master: music itself.

According to the Apple Music website “Zane Lowe and his handpicked team of renowned DJs create an eclectic mix of the latest and best in music”; then again, if you keep scrolling the page, you’re reminded there is more to Beats 1 than curated music:

Building your own station couldn’t be easier. Just select any song, album, or artist and it will practically build itself. Adjust the mix to hear more songs you know or discover unfamiliar gems. Love a track? We’ll play more like it. The more you fine-tune the station, the more personalized it becomes.

That sounds a bit like an algorithm. So which is more important, and why?

THE RISE OF CURATION

Curation has been all over the news for the past few weeks. At that same keynote Apple introduced Apple News, and while the presentation made it sound a bit like those user-generated radio stations — Craig Federighi introduced it as “Beautiful content from the world’s greatest sources personalized for you” — it turns out that Apple is hiring editors to, in the words of the Apple job posting, “Ensur[e] that important breaking news stories are surfaced quickly, and enterprise journalism is rewarded with high visibility.”

Apple News is hardly the only effort in the space: a month previously the New York Times released version 2 of its NYT Now app; the big headline was that the app was now free, but just as interesting was the decision to decrease the number of articles from the New York Times itself and intersperse them with a nearly equal number of articles from other publications, with the intent of providing a one-stop curated news experience.

BuzzFeed just released its own take on the concept with the BuzzFeed News app, which adds tweets to a mix of BuzzFeed content and content from around the web, all helpfully summarized in easily digestible bullet points. Twitter itself announced plans to get in on the game with its forthcoming Project Lightning, a tool that, according to BuzzFeed, “will bring event-based curated content to the Twitter platform.” The article notes:

Launch one of these events and you’ll see a visually driven, curated collection of tweets. A team of editors, working under Katie Jacobs Stanton, who runs Twitter’s global media operations, will select what it thinks are the best and most relevant tweets and package them into a collection…They’ll use data tools to comb through events and understand emerging trends, and pluck the best content from the ocean of updates flowing across Twitter’s servers. But human beings will decide which tweets to include.

Lightning hasn’t launched, but Snapchat’s Live Stories have been drawing in huge viewer numbers for some time now; they too are driven by curation: Recode reports that “the company has grown its team of Live Story curators from fewer than 10 people to more than 40 people” since January, and is now producing multiple events per day. Even Instagram is adding curation to its new Explore page.

WHEN CURATING MAKES SENSE

There are two important advantages to curation:

  • First, where context is critical to immediately determining how important something is — as is the case with news — human curators are, at least for now, superior to algorithms. Humans are also able to quickly identify that these forty stories are about the same event, and have the taste to decide which is the best option to present
  • Taste figures much more prominently when it comes to Apple Music and other similar endeavors. The DJ-focused Beats 1 “radio” station, for example, is clearly intended to make certain songs popular, not simply identify popularity after it is already attained. This in particular is a natural fit for Apple, and is the part of Apple Music I am most intrigued by: the company is most comfortable setting trends, not following them (as is the case with the core streaming service)

It’s possible that algorithms will one day be superior to humans at both of these functions, but I’m skeptical: the critical recognition of context and creativity are the two arenas where computers consistently underperform humans.

THE ALGORITHMIC GIANTS

That said, despite curation’s advantages the two biggest content players of all — Google and Facebook — are pure algorithmic plays. Google News has always been algorithmically driven, but the more important tool for content is Google search itself, which uses the most valuable algorithm in the world to not only find content but to rank it as well. Facebook, meanwhile, is in some respects the exact opposite of Google: rather than responding to an input Facebook proactively selects what you see when you open the app; that selection, though, is also 100% algorithmically driven.

Both search results and the news feed are algorithmically driven

When considering the question of what is better, algorithms or curation, I think this observation that the core Facebook and Google algorithms are actually solving two very different problems is a useful one. Google is seeking the single best answer to a direct query from an effectively infinite number of data points (i.e. the Internet); while the answer it gives is to a degree influenced by the profile Google has built about you, or the various contextual clues surrounding your search, for most queries there is one right answer that Google will return to anyone who searches for the term in question. In short, the data set is infinite (which means no human is capable of doing the job), but the target is finite. Facebook, on the other hand, creates a unique news feed for all of its 1.44 billion users: while Facebook has a huge amount of data, the amount of information any one user will ever be interested in is finite; what is infinite is the number of targets (which means Facebook could never employ enough humans to do the job). In other words, neither Google nor Facebook is able to rely on curation even if they wanted to, but the reasons that Google and Facebook rely on algorithms differ:

Google searches an (effectively) infinite amount of data, while Facebook needs an (effectively) infinite amount of personalization, which is why both are algorithmically driven

However, as I just noted, these two reasons run in opposite directions: Google does personalize a bit, but it is mostly concerned with one right answer, while any single Facebook user doesn’t care and will never care about the vast majority of Facebook’s data. Presuming this relationship holds, you can actually put the above two graphs together:

Curation makes sense in the middle of Google and Facebook: some personalization, and a finite set of data to curate

This curve is a useful way to think about the aforementioned curation initiatives: curation works best when there is a good amount of data, but not too much, and the goal is a fair bit of personalization, but not on an individual basis.

CURATING NEWS

The Curation-Algorithm curve makes it clear why news is an obvious curation candidate: while a lot of news happens everywhere all the time, it’s still a lot less than the sum total of information on the Internet. Moreover, the sort of news most people care about tends to be relatively widely applicable, which means personalization is useful but only to a degree. In other words, news mostly sits at the bottom of this curve. Newspapers figured this out a long time ago: editors were curators, deciding what went on the front page, what was on page 13, and what was buried completely. It mostly worked, although many editors perhaps became too enamored with “prestige” stories like world news as opposed to truly understanding what readers wanted. Moreover, once the Internet destroyed geographic monopolies, it quickly became apparent that most newspapers didn’t have the best content on the particular stories they covered; readers fled to superior alternatives wherever they happened to find them, and curation gave way to social services like Twitter and Facebook.

This is what makes the NYT Now and BuzzFeed News apps so interesting: both accept the idea that their respective publications don’t have a monopoly on the best content, even as both are predicated on the idea that curation remains valuable. Apple News takes this concept further by being completely publication agnostic.

THE TWITTER QUESTION

The current Twitter product, based on a self-curated time-line, doesn’t really fit well on the Curation-Algorithm curve. Power users, through the long and arduous process of following and unfollowing a huge number of people, can ultimately arrive at a highly personalized feed that is relevant to their interests. Beginners, though, are presented with a feed that is nominally about their interests as decided by a torturous first-run experience but which in reality is a stream of mumbo-jumbo.

Twitter struggles in part because it doesn’t have any products on the Curation-Algorithm curve

Project Lightning is clearly focused on hitting the algorithmic sweet spot with event-based “channels”: it’s an obvious move that should have been done years ago. What is perhaps more interesting, though, is whether Twitter ought to pursue an algorithmic feed: I think the answer is “Yes”. While Twitter’s value is its interest graph, its organizing principle to date has been people; an algorithmic feed would help Twitter more effectively bridge that disconnect.

CURATING ETHICS

There is one more big reason why tech companies have previously given curation short shrift, and it’s the flipside of Apple’s efforts with Music: it is a lot easier to sidestep responsibility for what you display if you can blame it on an algorithm. Human curation, on the other hand, makes it explicitly clear who is responsible for what is seen by the curating company’s users. The potential quandaries are easy to imagine: will Apple’s News app highlight a story about worker conditions in China?

Will Snapchat’s planned coverage of the 2016 election favor one candidate over the other? Would Twitter have created an “event” around the exit of its CEO?

On the other hand, hiding behind algorithms is increasingly untenable as well. For one, algorithms are made by humans; choosing which story appears in your Facebook feed is the responsibility of Facebook, whether it chooses explicitly or implicitly via an algorithm. Google, for its part, has successfully argued that its algorithm is protected free speech, an admission of ultimate responsibility even more profound than the company’s regular algorithmic updates explicitly designed to adjust rankings.

Google in particular has a special responsibility. I wrote in Economic Power in the Age of Abundance:
The Internet is a world of abundance, and there is a new power that matters: the ability to make sense of that abundance, to index it, to find needles in the proverbial haystack. And that power is held by Google. Thus, while the audiences advertisers crave are now hopelessly fractured amongst an effectively infinite number of publishers, the readers they seek to reach by necessity start at the same place – Google – and thus, that is where the advertising money has gone.

Google’s position as the Internet chokepoint has been exceptionally profitable, but with great power comes great responsibility: in a welcome development Google is slowly accepting said responsibility and delisting revenge porn upon request. It’s the right move for both moral and practical reasons — moral because Google is uniquely positioned to prevent people’s lives from being ruined, and practical because if Google didn’t take action eventually the government would compel them. Indeed, that has already happened in Europe with the “right to be forgotten”, and while there is certainly a debate to be had as to whether or not that is good policy, the idea that Google is a hapless bystander is no longer viable.


Ultimately, I see the embrace of curation as a mark of maturation of the technology industry. Today’s technology companies have massive amounts of influence over what people the world over see and consume, and while there is a long way to go when it comes to transparency about what is seen and why, at least everyone is now being honest about possessing that power in the first place.

Moreover, I’m excited about the real user benefit that can come from balancing algorithms and curation: while Facebook and Google rightly focus on algorithms only, most content is best delivered by a mixture; getting that mixture right will likely prove to be both massively popular and massively valuable.

A reminder that your Instagram photos aren’t really yours: Someone else can sell them for $90,000


This is an excerpt from The Washington Post, written by Jessica Contrera, May 25.
Richard Prince’s Instagram screenshots at Frieze Art Fair in New York. (Marco Scozzaro/Frieze)
The Internet is the place where nothing goes to die.

Those embarrassing photos of your high school dance you marked “private” on Facebook? The drunk Instagram posts? The NSFW snapchats? If you use social media, you’ve probably heard a warning akin to “don’t post anything you wouldn’t want your employer (or future employer) to see.”

We agree, and are adding this caveat: Don’t post anything you wouldn’t want hanging in an art gallery.

This month, painter and photographer Richard Prince reminded us that what you post is public, and given the flexibility of copyright laws, can be shared — and sold — for anyone to see. As a part of the Frieze Art Fair in New York, Prince displayed giant screenshots of other people’s Instagram photos without warning or permission.

The collection, “New Portraits,” is primarily made up of pictures of women, many in sexually charged poses. They are not paintings, but screenshots that have been enlarged to 6-foot-tall inkjet prints. According to Vulture, nearly every piece sold for $90,000.

How is this okay?

First you should know that Richard Prince has been “re-photographing” since the 1970s. He takes pictures of photos in magazines, advertisements, books or actors’ headshots, then alters them to varying degrees. Often, they look nearly identical to the originals. This has, of course, led to legal trouble. In 2008, French photographer Patrick Cariou sued Prince after he re-photographed Cariou’s images of Jamaica’s Rastafarian community. Although Cariou won at first, on appeal, the court ruled that Prince had not committed copyright infringement because his works were “transformative.”

In other words, Prince could make slight adjustments to the photos and call them his own.

Prince’s 1977 work “Untitled (four single men with interchangeable backgrounds looking to the right),” which is made of photos that previously appeared in print. (Metropolitan Museum of Art, “The Pictures Generation” exhibition)
This is what he did with the Instagram photos. Although he did not alter the usernames or the photos themselves, he removed captions. He then added odd comments on each photo, such as “DVD workshops. Button down. I fit in one leg now. Will it work? Leap of faith” from the account “richardprince1234.” The account currently has 10,200 followers but not a single picture — perhaps so you can’t steal his images in return?

“New Portraits” first debuted last year at Gagosian Gallery on Madison Avenue, the same location where the artist displayed the Rastafarian images he was sued for.

If someone wanted to argue that this collection is not “transformative” enough to be legal, they would have to file a lawsuit on their own. Upon seeing this story, a spokesman from Instagram said:

“People in the Instagram community own their photos, period. On the platform, if someone feels that their copyright has been violated, they can report it to us and we will take appropriate action. Off the platform, content owners can enforce their legal rights.”

Basically, if someone copies your Instagram to an account of their own, the company can do something about it. If they copy your work to somewhere outside of the social network, like a fancy New York gallery, you’re on your own.

Prince appears to be enjoying the controversial attention. He has been re-tweeting and re-posting his many critics.

How Stephen Wolfram’s image-recognition tool performs against 5 alternatives


Is that a magic mushroom? ImageIdentify thinks it is. (Image credit: Screenshot)

This week Stephen Wolfram, founder and chief executive of Wolfram Research, announced a new component of the Wolfram Language for programming called ImageIdentify. Wolfram also introduced a new website, dubbed The Wolfram Language Image Identification Project, that demonstrates the language’s new capabilities.

The new site lets you upload images and get inferences and definitions in response. You can provide feedback, which should help it become more accurate. You can hit buttons like “Great!,” “Could be better,” “Missed the point,” and “What the heck?!” After you choose one, the service offers a few more guesses, and a text box where you can type in a tag. Then you can type in your email address, so it can tell you “when ImageIdentify learns more about your kind of image.”

The service uses a trendy type of artificial intelligence called deep learning. It draws on artificial neural networks, which train on a large quantity of information, like pictures, and then make inferences when you give them new information, like a new picture. Big web companies like Facebook, Google, and Microsoft use deep learning for various purposes, and increasingly smaller companies have been exposing deep learning tools for pretty much anyone to try out.

To get a rough sense of the power of the new Wolfram technology, I decided to put it up against other existing image-recognition systems you can test out on the Internet today, from CamFind, Clarifai, MetaMind, Orbeus, and IBM-owned AlchemyAPI. I chose images from Flickr that seemed to clearly fall into the 1,000 categories used for the 2014 ImageNet visual recognition competition. It was unscientific — just for the sake of curiosity.

What I found is that Wolfram’s new system doesn’t seem to be all that bad. It wasn’t overly conservative or vague, and it didn’t make many obvious mistakes — although it wasn’t as consistently accurate as MetaMind, for one. With time, Wolfram’s technology should improve — especially as people point out its flaws.

Here are 10 of the tests I ran to reach my conclusion.

1. Coffee mug

Wolfram ImageIdentify: tea
CamFind: white ceramic mug
Clarifai: coffee cup nobody tea mug cafe hot ceramic coffee cup cutout
MetaMind: Coffee mug
Orbeus: cup
AlchemyAPI: coffee


2. Mushroom

Wolfram ImageIdentify: magic mushroom
CamFind: white mushroom
Clarifai: mushroom fungi fungus toadstool nature grass fall moss forest autumn
MetaMind: Mushroom
Orbeus: fungus
AlchemyAPI: mushroom


3. Spatula

Wolfram ImageIdentify: spatula
CamFind: black kitchen turner
Clarifai: steel wood knife handle iron fork equipment nobody tool chrome
MetaMind: spatula
Orbeus: tool
AlchemyAPI: knife


4. Scoreboard

Wolfram ImageIdentify: scoreboard
CamFind: baseball scoreboard
Clarifai: scoreboard soccer stadium football game competition goal group north America match
MetaMind: Scoreboard
Orbeus: billboard
AlchemyAPI: sport


5. German shepherd

Wolfram ImageIdentify: German shepherd
CamFind: black and brown German shepherd
Clarifai: dog canine cute puppy mammal loyalty grass sheepdog fur German shepherd
MetaMind: German Shepherd, German Shepherd Dog, German Police Dog, Alsatian
Orbeus: animal
AlchemyAPI: dog


6. Toucan

Wolfram ImageIdentify: tufted puffin
CamFind: toucan bird
Clarifai: bird one north America nobody animal people adult nature two outdoors
MetaMind: toucan
Orbeus: animal
AlchemyAPI: sport


7. Indian cobra

Wolfram ImageIdentify: black-necked cobra
CamFind: brown and beige cobra snake
Clarifai: snake nobody reptile cobra wildlife daytime sand rattlesnake north America desert
MetaMind: Indian cobra, Naja Naja
Orbeus: animal
AlchemyAPI: snake


8. Strawberry

Wolfram ImageIdentify: strawberry
CamFind: red strawberry fruit
Clarifai: fruit sweet food strawberry ripe juicy berry healthy isolated delicious
MetaMind: strawberry
Orbeus: strawberry
AlchemyAPI: berry


9. Wok

Wolfram ImageIdentify: cooking pan
CamFind: gray steel frying pan
Clarifai: ball nobody pan cutout kitchenware north America tableware competition bowl glass
MetaMind: wok
Orbeus: frying pan
AlchemyAPI: (No tags)


10. Shoe store

Wolfram ImageIdentify: store
CamFind: black crocs
Clarifai: colour street people color car mall road fair architecture hotel
MetaMind: Shoe Shop, Shoe Store
Orbeus: shoe shop
AlchemyAPI: sport

TURNS OUT THERE ARE A LOT OF ACADEMICS STUDYING PHOTO FILTERS

This is an excerpt from Wired, written by Molly McHugh, May 24, 2015.

IF YOU’RE LIKE most people on Instagram, you’ll scroll through all 22 filters, carefully consider the nuances of Inkwell vs. Lo-Fi vs. Hudson, and then settle on one of the filters you always use. Oh sure, there are  so many filters, but you always go back to your favorites “just because.”

Turns out it isn’t “just because.” There are some specific reasons you rely upon your old faithfuls, and a growing body of science examining how and why people choose filters and how those choices influence others’ reactions to the photo. In one study out of Yahoo Labs, researchers looked at 7.6 million Flickr photos (many of which originated on Instagram and were uploaded to Flickr) and found that “filtered photos are 21 percent more likely to be viewed and 45 percent more likely to be commented on.”

This study is but a drop in a fairly shallow pool: Despite mobile photography’s massive popularity, it’s been largely ignored by academics. “There is little work—scholarly or otherwise—around filters, their use, and their effect on photo-sharing communities,” the Yahoo Labs study explains. That’s due in part to photos being harder than text to analyze, but that shouldn’t be an excuse anymore, especially given the active commenting community on Instagram and other social media.

The Yahoo Labs team is not alone in its fascination. Researchers at Arizona State University have been studying Instagram and its filters since last year. “We were (and continue to be) motivated by the fact that Instagram has received very little attention from the research community,” says one of them, Subbarao Kambhampati. “We believe that a careful analysis of Instagram can give us a valuable window into our collective online behavior.”

The Yahoo study focused specifically on filters, and found people like higher contrast and corrected exposure, and find a warmer temperature more appealing than a cooler one. “Serious hobbyists” use filters only to correct a problem—say, fixing the exposure. “More casual photographers” are more likely to manipulate their images with filters or adjustments that make them appear more “artificial,” according to the study.

Example of an original photo (top left) and many filtered variations. Some filters change contrast, brightness, or saturation; some add warmth or cool colors or change the borders.

One particularly interesting part of the research examined just who’s using Flickr. A few years ago, the “Flickr vs. Instagram” debate could be cast as “real camera vs smartphone camera.” That’s no longer the case. The iPhone rules all.

“The iPhone has been the most popular camera for years now,” says David Ayman Shamma, one of the Yahoo researchers. The proliferation of the iPhone and smartphones in general has led to photography of all kinds becoming a creative outlet for millions. “It’s been the dream since the Kodak Brownie. With it, comes the creative space for many outlets and photographers, from people shooting on vintage film to food bloggers.”

Shamma says profiling Flickr users is a more complex task, because so many people use it for so many things. “Some people on Flickr only want to push their best portfolio pieces from DSLRs, while others publish their daily lives from their iPad camera, and many do a mix of cameras and content.”

No matter what you shoot or where you post it, you will be heartened to hear filter snobbery is dying. Pro shooters who once sneered at Instagram obsessives and their love of Rise, Mayfair, and X-Pro II aren’t quite as judgmental as they used to be (or, perhaps, as we only thought they were). These days, everyone uses filters.

“One of the surprising things to me was the pro set was talking fondly about the filters,” Shamma says. “Not that I thought they’d be snobby and elitist before the study, but I assumed they’d rather use their software tools of the trade over a one-click filter.” If you don’t want to say it, it’s OK, I will: I thought they would be snobby and elitist.

Of course, it’s impossible to talk about filters without talking about Instagram, because it is the world’s most popular photo app (it’s one of the world’s most popular apps, period). It seems there could be a difference between people who use Instagram to take and manipulate photos and those who do so with Flickr, or someone who uses Flickr to edit a photo and then cross-posts it to Instagram (or vice versa).

“There are similarities and differences for sure and we can see them by looking at what’s uploaded to Flickr via the Flickr app and what’s uploaded to Flickr through Instagram,” Shamma says. “The nature photos on Flickr from Instagram show more engagement when they are filtered, so it’s a function of what sub-community you’re speaking to.”

But Shamma says there are unifying factors when it comes to filters, regardless of platform or skill level. Could someone, therefore, use this research to design the perfect filter? In a word, no. “As awesome as something automatic might sound, there really is no silver bullet here,” Shamma says.

That’s because there’s more at play than how a filter looks. In many cases, the act of choosing the filter is equally important. “When we interviewed people for this study, we found that the photographers, regardless of skill level, enjoyed the process of selecting a filter,” he says. This explains why people painstakingly scroll through them all before invariably choosing a favorite. Even if someone could engineer the perfect filter, people wouldn’t want to lose out on seeing their photos transformed by all those filters. The element of choice, the function of looking and choosing, is one reason people so love filters in the first place.

Instagram filters

In case you were wondering, another study, conducted in 2014 by the Arizona State researchers, identified the most popular Instagram filters. They are, in descending order of popularity, no filter, Amaro, X-Pro II, Valencia, and Rise. Seeing “no filter” is a bit of a shock, given what Yahoo’s study says, but what’s most interesting is that the most popular filters may be the most popular because everyone thinks they’re popular. “These top five filters are actually present in the first seven filters of Instagram GUI at that time,” Kambhampati explains. “This brings up the possibility that the (accidental?) placement of filters has more to do with their eventual popularity than any conscious photographic choice by the Instagram user.” The same study also found that there are generally only a few categories of Instagram photos: friends, food, gadgets, quote pics, pets, activities, selfies, and fashion.

Instagram categories

The researchers are still digging into Instagram, and plan to look at how the social network diffuses information and just what makes an image “go viral.” Such questions have been asked of other social networks, but not Instagram. “We have come up with a number of indirect measures to study diffusion (in particular, by studying the number of ‘likes’ and comments received in terms of the number of hops separating the liking and commenting user from the posting user).” The researchers soon will present a paper showing how we can glean sentiment from Instagram images with the help of “image features and the features from the textual comments.” Perhaps someday soon, we won’t simply know which Instagram filters are popular, but also how they make us feel.

 

Flickr Considers Letting Users Opt-Out Of Auto-Tagging


This is an excerpt from TechCrunch.

Not everyone was happy with last week’s major revamp of Yahoo-owned photo-sharing site Flickr. A small but very vocal portion of Flickr’s user base of 100 million members immediately took to the forums to lament the fact that the site’s new “auto-tagging” feature was enabled by default, and, worse, that there was no opt-out option provided. But that may now be changing, we understand.

Flickr recently introduced a series of upgrades to its service on the web and on mobile designed to make every aspect of photo editing, organization and sharing easier on its service. A couple of the more notable changes were the addition of auto-tagging and new image-recognition capabilities. Combined, these features allow Flickr to identify what’s in a photo, and then automatically categorize it on users’ behalf by adding tags. This, in turn, makes images easier to surface by way of search.


Auto-tagging especially makes sense in today’s highly mobile age, where users take large numbers of photos and most no longer have the time or inclination to carefully group them or categorize them by manually adding tags. Tags, after all, are a holdover from an earlier time – the not-too-distant past before we all carried smartphones in our pockets capable of taking quality photos.

But for many Flickr users, tags are something they still feel strongly about, judging by the forum’s many comments. With over 1,370 replies to the official Flickr post (and growing), these users have been venting their frustration about the addition of auto-tagging. Many of those commenting have actually been fairly conscientious about their tags over the years, and don’t like that Flickr is now adding its own tags to their photos.

In addition, several also complain that Flickr’s auto-tags simply aren’t that accurate. In some cases, those mistakes are somewhat benign – a BMW gets tagged as a Ferrari, for example. But other times, they can be really terrible – as in the case of a user whose Auschwitz photos were incorrectly tagged as “sport,” for instance.

The problem lies with the fact that an algorithmic system of tagging is never going to be perfect – though it is capable of improving over time based on users’ corrections. But some are unwilling to wait for that training process to occur. They just want out. Period.

However, Flickr doesn’t offer an option to disable the auto-tagging at all, which is a rather bold stance to take. And while users can batch edit a group of photos’ tags, they can’t batch edit the auto-generated ones; the only way to change auto-generated tags is to go into each photo individually. This is far too time-consuming for most people to manage, which is why so many are upset.

But Flickr tells us that it’s taking the community feedback on the matter seriously, and is evaluating an option that would allow an opt-out of the automated tagging. The option is not yet being built, but it is at least being actively discussed, from what we understand.

The company further explains that auto-tagging is actually a fairly crucial part of the upgraded service, as it is what powers a number of the new features, including the “Magic View,” which helps users organize and share their photos based on topic, as well as the new search tools and other “future features” still in the works. That could explain why Flickr felt strongly enough about auto-tagging to not make it an opt-in option in the first place, as well as why there’s no “off” switch for the time being.

While a large majority of consumers likely won’t care (or maybe even notice), for those power users and others who rely heavily on Flickr as their main online image repository, adding the “opt-out” option – even as a gesture to the community – would be appreciated.

SQUARE PIXEL INVENTOR TRIES TO SMOOTH THINGS OUT

This article is an excerpt from Wired, 06.28.10.

Stock Photography vs. Real Photos


This is an excerpt from Tommy Walker‘s article on ConversionXL

When it comes to online imagery, it’s not so much about having images as it is about making sure those images give the visitor a sense of texture, size, scale, detail, context & brand. According to MDG Advertising, 67% of online shoppers rated high quality images as being “very important” to their purchase decision, which was slightly more than “product specific information”, “long descriptions”, and “reviews & ratings”:

It’s All About the Images

Joann Peck & Suzanne B. Shu of UCLA published a study called “The Effect of Mere Touch on Perceived Ownership” that found that when the imagery of an object was vivid and detailed, it increased their perceived ownership of the product.

Picture Heat Map

Moreover, psychologists Kirsten Ruys & Diederik Stapel of the Tilburg Institute for Behavioral Economics Research found that imagery has the ability to affect a person’s mood, even when they’re unaware it is happening. In their research, they flashed images across a screen in a manner that made it impossible for participants to be fully conscious of what they were seeing. Participants were then tested on cognition, feelings & behavior, and in the end it was found that their general mood reflected the images they were subconsciously exposed to.

So Why The Hell Do You INSIST On Using Stock Photography?!

Alright, look… I get it. You’re on a budget. You need an image that represents “freedom” or “happiness” or ::shutter:: “corporate synergy”.

You’ve diplomatically explained to the client that they really should be using custom photography, but they insist you find a “better/cheaper representation online.” You’ve also gotten that uneasy vibe they’ll invoke “the customer is always right/I can take my business elsewhere” conversation, if you push too hard.

So you go to iStockphoto or Shutterstock, run a query, and try to find the best representation of whatever vague concept you’ve been given as a part of the brief. You pay, download the stock photo, jury-rig it into your design & look at your work with a mixed sense of pride & shame. But the client LOVES it! (“See, looks like Stock wasn’t so bad after all, was it Mr. Designer?”)

Here’s the problem:

Reverse Image Search

2,931 results on TinEye. Every other poor schmuck in every other vertical has used the Exact. Same. Photograph. And if you’re really unfortunate, one of those other schmucks is also your competitor.

Meet The Everywhere Girl
Back in 1996, Jennifer Anderson posed for a stock photo shoot shortly after graduating college. At the time, companies would subscribe to a service & receive their stock photos on a CD-ROM. Trouble was, the companies receiving the CDs didn’t have an easy way to verify who else was using the photo, and the license for the images was not exclusive – meaning anyone could use them. Within a few years, Jennifer became the face of college girls in what seemed to be every marketing campaign. The most notorious faux pas was in 2004, when PC competitors Dell & Gateway used photos from the same photo shoot in their “Back to School” promotional material.

But did it stop there? Nope. Other companies who ended up using photos from Jenn’s stock shoot were:

  • H&R Block
    Samsung
    Microsoft
    Grayhound
    US Bank
    AAA Auto Insurance
    A series of books about Christianity
    A teen chat line
    A car stereo store
    An actuary website
    Jenn’s image became so common online, that there were online communities that were dedicated to reporting sightings of this stock photo model around the web.

Why You Have To Be Careful With How You Use Stock Photos
While Jenn’s story is comical in its own right, there are some pretty serious negative connotations for brands inadvertently using the same stock photo to represent the same concept.

Looking at you, Customer Service girl.
The main problem is what’s called the Picture Superiority Effect, where “concepts are much more likely to be remembered experimentally if they are presented as pictures rather than as words.”

According to Wikipedia, this has to do with Allan Paivio’s “dual-coding theory”, which states that mental associations become stronger when they’re presented both visually & verbally (or through text). “Visual and verbal information are processed differently and along distinct channels in the human mind, creating separate representations for information processed in each channel. The mental codes corresponding to these representations are used to organize incoming information that can be acted upon, stored, and retrieved for subsequent use.”

This applies to both positive & negative experiences. Considering that nearly 2 million Americans fall victim to online scams a year, and many scam sites lean heavily on low-priced stock photography… the odds are not in your favor.

We already know from the “The Science of Storytelling & It’s Effect on Memory” article that when a visitor lands on your site for the first time, everything they see is being processed through their working memory – the hyper-short term memory that pulls information from your long term memory to make judgements on what it sees within milliseconds.


If the stock photo you’re using is at all similar to another website that created a negative experience for the visitor, subconsciously, they’re projecting their negative experiences onto your stock photograph, reducing trust & adding friction to the process.

This is likely the real reason why when Marketing Experiments tested a real photo of their client against their top performing stock photo, they found that nearly 35% of visitors would be more likely to sign up when they saw the real deal. Taken to an extreme, using the wrong stock photography could also result in a form of “mistaken identity.” Though this article isn’t specific to using stock photography, the story of Arizona Discount Movers perfectly illustrates what could happen when the good guys get penalized for something the bad guys did.

Stock photos in & of themselves can be a useful, quick & effective way to communicate your point, but you should probably follow a few steps to make sure you’re getting the most out of stock photography.

Step 1 – See Who Else Is Using That Stock Photo
This is where a tool called TinEye comes in very handy to do a “reverse image search” to see where else that photo has been used: If you get something like “168 results”, take the time to investigate who else has used that image, and how they’ve used it. If they cater to a similar market and/or have a huge reach, find a different stock photo. The last thing you want is to try and be unique by using a photo everyone’s already seen. For added peace of mind, go to Google Images and drag the photo into the search bar. Google will pull up all of the exact instances of that photo, so you can see if there’s anything that TinEye had missed.
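TinEye and Google Images search the open web; as a purely local complement, a perceptual hash can flag whether two files you already have on disk are the same photo in disguise. Below is a rough sketch using the Pillow and imagehash libraries; the file names and threshold are placeholders, and this is a different technique from a web-wide reverse image search, not a replacement for it.

```python
# Rough local near-duplicate check with perceptual hashing.
from PIL import Image
import imagehash

candidate = imagehash.phash(Image.open("stock_candidate.jpg"))
already_seen = imagehash.phash(Image.open("competitor_homepage_hero.jpg"))

# Hamming distance between hashes: 0 means identical; small values usually
# mean crops or recompressions of the same photo. The cutoff of 8 is arbitrary.
if candidate - already_seen <= 8:
    print("Likely the same photo - keep looking.")
else:
    print("Probably a different image.")
```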

Step 2 (Optional) – Check To See If You Can Get A “Rights Managed” License
If the image in question hasn’t been used by everyone in the known world, check to see if you can keep it that way. A rights managed license makes it so you have exclusive use of that image within the markets you specify, for a specified time frame.

Rights Managed Time-Frame

Even though rights managed licenses are more expensive, they are huge insurance against anyone else using your image, thereby preventing an “Everywhere Girl” scenario of your own. To read the rest of the original story on ConversionXL click here: http://conversionxl.com/stock-photography-vs-real-photos-cant-use/

Conclusion

If you must use stock photography, make sure it’s on brand, not grossly overused & do what you can to make it your own. Basic and advanced photo-manipulation tactics can transform stock photos into completely unique pieces; they just take a little more time to create. But also, don’t be afraid to take your own photos either.

It’s amazing how much quality is packed into smartphones and other less expensive camera options. With a little planning & some basic knowledge on how lighting & composition work, you can take unique, high quality photographs that better represent your brand.

Featured image source
http://conversionxl.com/stock-photography-vs-real-photos-cant-use/

The Watermark Project


The Watermark Project from George Prest on Vimeo.

“How do you change perception of a billion dollar company? Not with advertising but by changing the very interface that made them less than popular in the first place. By changing their product.
This is the first work that R/GA London has done for one of its newest clients, Getty Images.
We’re dead proud of it.” -George Prest


“Smart” Software Can Be Tricked into Seeing What Isn’t There


Humans and software see some images differently, pointing out shortcomings of recent breakthroughs in machine learning.

By Caleb Garling, December 24, 2014. Read the full original article at MIT Technology Review.


WHY IT MATTERS

Image recognition algorithms are becoming widely used in many products and services.

Images like these were created to trick machine learning algorithms. The software sees each pattern as one of the digits 1 to 5.

A technique called deep learning has enabled Google and other companies to make breakthroughs in getting computers to understand the content of photos. Now researchers at Cornell University and the University of Wyoming have shown how to make images that fool such software into seeing things that aren’t there.

The researchers can create images that appear to a human as scrambled nonsense or simple geometric patterns, but are identified by the software as an everyday object such as a school bus. The trick images offer new insight into the differences between how real brains and the simple simulated neurons used in deep learning process images.

Researchers typically train deep learning software to recognize something of interest—say, a guitar—by showing it millions of pictures of guitars, each time telling the computer “This is a guitar.” After a while, the software can identify guitars in images it has never seen before, assigning its answer a confidence rating. It might give a guitar displayed alone on a white background a high confidence rating, and a guitar seen in the background of a grainy cluttered picture a lower confidence rating (see “10 Breakthrough Technologies 2013: Deep Learning”).
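For a rough feel of that inference step, here is a minimal sketch that runs a pretrained AlexNet from the torchvision library (an assumption for illustration, not the researchers' setup) on one image and reports a class index with its confidence; the file name is a placeholder.

```python
# Minimal sketch: classify one image with a pretrained network and report
# the top class index and its softmax probability ("confidence rating").
import torch
from PIL import Image
from torchvision import models, transforms

model = models.alexnet(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("guitar.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]
confidence, class_idx = probs.max(dim=0)
print(f"class {class_idx.item()} with confidence {confidence.item():.2f}")
```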

That approach has valuable applications such as facial recognition, or using software to process security or traffic camera footage, for example to measure traffic flows or spot suspicious activity.

But although the mathematical functions used to create an artificial neural network are understood individually, how they work together to decipher images is unknown. “We understand that they work, just not how they work,” says Jeff Clune, an assistant professor of computer science at the University of Wyoming. “They can learn to do things that we can’t even learn to do ourselves.”

These images look abstract to humans, but are seen by the image recognition algorithm they were designed to fool as the objects described in the labels.

To shed new light on how these networks operate, Clune’s group used a neural network called AlexNet that has achieved impressive results in image recognition. They operated it in reverse, asking a version of the software with no knowledge of guitars to create a picture of one, by generating random pixels across an image.

The researchers asked a second version of the network that had been trained to spot guitars to rate the images made by the first network. That confidence rating was used by the first network to refine its next attempt to create a guitar image. After thousands of rounds of this between the two pieces of software, the first network could make an image that the second network recognized as a guitar with 99 percent confidence.
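A loose approximation of that generate-and-rate loop is simple random hill-climbing: start from random pixels and keep any mutation that raises the trained network's confidence in the target class. The sketch below makes several assumptions (a torchvision AlexNet stands in for the trained network, and the ImageNet class index, step size, and iteration count are placeholders); the actual research used evolutionary algorithms and gradient-based methods.

```python
# Sketch of the generate-and-rate loop via random hill-climbing.
import torch
from torchvision import models

model = models.alexnet(pretrained=True).eval()
target_class = 546  # assumed ImageNet index for "electric guitar"; a placeholder

def confidence(img):
    """The trained network's confidence that the image shows the target class."""
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)
    return probs[0, target_class].item()

img = torch.rand(1, 3, 224, 224)   # start from random pixels
best = confidence(img)
for step in range(1000):            # more iterations in practice
    candidate = (img + 0.05 * torch.randn_like(img)).clamp(0, 1)
    score = confidence(candidate)
    if score > best:                # keep only mutations that raise confidence
        img, best = candidate, score
print(f"network confidence after search: {best:.2f}")
```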

However, to a human, those “guitar” images looked like colored TV static or simple patterns. Clune says this shows that the software is not interested in piecing together structural details like strings or a fretboard, as a human trying to identify something might be. Instead, the software seems to be looking at specific distance or color relationships between pixels, or overall color and texture.

That offers new insight into how artificial neural networks really work, says Clune, although more research is needed.

Ryan Adams, an assistant computer science professor at Harvard, says the results aren’t completely surprising. The fact that large areas of the trick images look like seas of static probably stems from the way networks are fed training images. The object of interest is usually only a small part of the photo, and the rest is unimportant.

Adams also points out that Clune’s research shows humans and artificial neural networks do have some things in common. Humans have been thinking they see everyday objects in random patterns—such as the stars—for millennia.

Clune says it would be possible to use his technique to fool image recognition algorithms when they are put to work in Web services and other products. However, it would be very difficult to pull off. For instance, Google has algorithms that filter out pornography from the results of its image search service. But to create images that would trick it, a prankster would need to know significant details about how Google’s software was designed.

How machine learning and image recognition could revolutionise search

In Depth: Unlocking information from images
By Mary Branscombe, December 25th, on TechRadar


Introduction

A machine learning system is capable of writing an image caption as well as a person.
Text in documents is easy to search, but there’s a lot of information in other formats. Voice recognition turns audio – and video soundtracks – into text you can index and search. But what about the video itself, or other images?
Searching for images on the web would be a lot more accurate if instead of just looking for text on the page or in the caption that suggests a picture is relevant, the search engine could actually recognise what was in the picture. Thanks to machine learning techniques using neural networks and deep learning, that’s becoming more achievable.
Caption competition

When a team of Microsoft and Facebook researchers created a massive data dump of over 300,000 images with 2.5 million objects labelled by people (called Common Objects in Context), they said all those objects are things a four-year-old child could recognise. So a team of Microsoft researchers working on machine learning decided to see how well their systems could do with the same images – not just recognising them, but breaking them up into different objects, putting a name to each object and writing a caption to describe the whole image.
To measure the results, they asked one set of people to write their own captions and another set to compare the two and say which they preferred.
“That’s what the true measure of quality is,” explains distinguished scientist John Platt from Microsoft Research. “How good do people think these captions are? 23% of the time they thought ours were at least as good as what people wrote for the caption. That means a quarter of the time that machine has reached as good a level as the human.”
Part of the problem was the visual recogniser. Sometimes it would mistake a cat for a dog, or think that long hair was a cat, or decide that there was a football in a photograph of people gesticulating at a sculpture. This is just what a small team was able to build in four months over the summer, and it’s the first time they had a labelled set of images this large to train and test against.
“We can do a better job,” Platt says confidently.
Machine strengths

Machine learning already does much better on simple images that only have one thing in the frame. “The systems are getting to be as good as an untrained human,” Platt claims. That’s testing against a set of pictures called ImageNet, which are labelled to show how they fit into 22,000 different categories.
“That includes some very fine distinctions an untrained human wouldn’t know,” he explains. “Like Pembroke Welsh corgis and Cardigan Welsh corgis – one of which has a longer tail. A person can look at a series of corgis and learn to tell the difference, but a priori they wouldn’t know. If there are objects you’re familiar with you can recognise them very easily but if I show you 22,000 strange objects you might get them all mixed up.” Humans are wrong about 5% of the time with the ImageNet tests and machine learning systems are down to about 6%.
That means machine learning systems could do better at recognising things like dog breeds or poisonous plants than ordinary people. Another recognition system called Project Adam, which MSR head Peter Lee showed off earlier this year, tries to do that from your phone.
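
As a point of reference, this is roughly what querying a pretrained ImageNet classifier looks like in practice; a hedged sketch using torchvision’s bundled weights, with a hypothetical input photo.

import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()                     # resize, crop, and normalise for ImageNet

img = Image.open("corgi.jpg").convert("RGB")          # hypothetical input photo
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

top5 = torch.topk(probs, 5)                           # top-5 accuracy is the usual ImageNet metric
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx.item()]}: {p:.1%}")
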
Project Adam

Project Adam was looking at whether you can make image recognition faster by distributing the system across multiple computers rather than running it on a single fast computer (so it can run in the cloud and work with your phone). However, it was trained on images with just one thing in them.
“They ask ‘what object is in this image?'” explains Platt. “We broke the image into boxes and we were evaluating different sub-pieces of the image, detecting common words. What are the objects in the scene? Those are the nouns. What are they doing? Those are verbs like flying or looking.
“Then there are the relationships like next to and on top of, and the attributes of the objects, adjectives like red or purple or beautiful. The natural next step after whole image recognition is to put together multiple objects in a scene and try to come up with a coherent explanation. It’s very interesting that you can look in the image and detect verbs and adjectives.”
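
A toy sketch of the decomposition Platt describes, with detections hard-coded as stand-ins for a real recogniser’s output: objects become nouns, attributes become adjectives, and a relation supplies the verb that ties a caption together. This is illustrative only, not Microsoft’s pipeline.

from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    noun: str                                       # e.g. "boy", "horse"
    adjectives: list = field(default_factory=list)  # e.g. ["young"], ["brown"]

@dataclass
class Relation:
    subject: DetectedObject
    verb: str                                       # e.g. "riding", "next to"
    obj: DetectedObject

def compose_caption(rel: Relation) -> str:
    def phrase(o: DetectedObject) -> str:
        return "a " + " ".join(o.adjectives + [o.noun])
    return f"{phrase(rel.subject)} {rel.verb} {phrase(rel.obj)}"

# Hard-coded detections standing in for real recogniser output.
boy = DetectedObject("boy", ["young"])
horse = DetectedObject("horse", ["brown"])
print(compose_caption(Relation(boy, "riding", horse)))   # a young boy riding a brown horse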

Making images useful

There are plenty of ways in which having your images automatically captioned and labelled will be useful, especially if you’re a keen photographer trying to stay on top of your image library or a news site looking for the right photograph.
“Indexing your photos by who’s in them is a very natural way to think about organising photos,” Platt points out. With more powerful labelling, you can search for objects in images (a picture of a cat) or actions (a picture of a cat drinking) or the relation between different objects in an image. “If I remember that I had a picture of a boy and a horse, I’d like to be able to index that – both the objects of the boy and the horse, and the relation between them – and put them in an index so I can go and search for them later.”
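
A minimal sketch of that kind of index, assuming the recogniser’s labels are already available: each label maps to the photos that contain it, so a later query for “boy” and “horse” returns the intersection. The labels and file names are invented.

from collections import defaultdict

index = defaultdict(set)   # label -> set of photo identifiers

def add_photo(photo_id, labels):
    for label in labels:
        index[label].add(photo_id)

def search(*labels):
    """Return photos carrying every requested label."""
    results = [index[label] for label in labels]
    return set.intersection(*results) if results else set()

# Invented labels standing in for recogniser output.
add_photo("img_001.jpg", {"boy", "horse", "boy riding horse"})
add_photo("img_002.jpg", {"cat", "drinking", "cat drinking"})
print(search("boy", "horse"))   # {'img_001.jpg'}
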
If you’re putting together a catalogue of products, having an automatically generated caption might be useful, but Platt doesn’t see much demand for something that specific. There is a lot of interest from different product teams at Microsoft, he says, but instead of creating captions for you he expects that “the pieces will be used in various products; behind the scenes, these bits will be running.”
Search relevance

Dealing with videos will mean making the recognition faster, and working out how to spot what’s interesting (because not every frame will be). But what’s important here is not just the speed; it’s how the kind of understanding that underlies captioning complex images could transform search.
The deep learning neural networks and machine learning systems this image recognition uses are the same technologies that have revolutionised speech recognition and translation in the last few years (powering Microsoft’s upcoming Skype Translator). “Every time you talk to the Bing search engine on your phone you’re talking to a deep network,” says Platt. Microsoft’s video search system, MAVIS, uses a deep network.
The next step is to do more than recognise, and actually understand what things mean.
“Even for text there’s a fair amount of work and that’s where there’s a lot of interesting value, if we can truly understand text as opposed to just doing keyword search. Just doing keyword search gets you a long way, that’s how all of our search engines work today. But imagine if you had a system that could truly understand what your documents were about and truly be an assistant to you.”
The goal, he says, is to “try to truly understand the semantics of objects like video or speech or image or text, as opposed to the surface forms like just the words or just the colours.”


Smile! Marketing Firms Are Mining Your Selfies

By | Attribution, Blog, Image Ads, Image Recognition, Monetization | No Comments

Excerpt by Douglas MacMillan and Elizabeth Dwoskin

Most users of popular photo-sharing sites like Instagram, Flickr and Pinterest know that anyone can view their vacation pictures if shared publicly.

But they may be surprised to learn that a new crop of digital marketing companies are searching, scanning, storing and repurposing these images to draw insights for big-brand advertisers.

Some companies, such as Ditto Labs Inc., use software to scan photos—the image of someone holding a Coca-Cola can, for example—to identify logos, whether the person in the image is smiling, and the scene’s context. The data allow marketers to send targeted ads or conduct market research.
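
The article doesn’t describe Ditto Labs’ internals, but the simplest illustration of logo spotting is template matching, sketched below with OpenCV; real commercial systems use learned detectors, and the file names and threshold here are placeholders.

import cv2

# Placeholder file names: a shared photo and a reference logo image.
photo = cv2.imread("shared_photo.jpg", cv2.IMREAD_GRAYSCALE)
logo = cv2.imread("brand_logo.png", cv2.IMREAD_GRAYSCALE)

# Slide the logo over the photo and score the match at every position.
scores = cv2.matchTemplate(photo, logo, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(scores)

if best_score > 0.8:   # threshold chosen for illustration only
    print(f"Possible logo near {best_location} (score {best_score:.2f})")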

Others, such as Piqora Inc., store images for months on their own servers to show marketers what is trending in popularity. Some have run afoul of the loose rules on image-storing that the services have in place.

The startups’ efforts are raising fresh privacy concerns about how photo-sharing sites convey the collection of personal data to users. The trove is startling: Instagram says 20 billion photos have already been shared on its service, and users are adding about 60 million a day.

The digital marketers gain access to photos publicly shared on services like Instagram or Pinterest through software code called an application programming interface, or API. The photo-sharing services, in turn, hope the brands will eventually spend money to advertise on their sites.
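
In code, that collection step amounts to little more than an HTTP query against such an API. The sketch below uses a hypothetical endpoint and parameters, not any real photo-sharing service’s API, which would require registered credentials and compliance with its terms.

import requests

# Hypothetical endpoint; real services have their own URLs, parameters, and rules.
API_URL = "https://api.example-photo-service.com/v1/media/search"

def fetch_public_photos(hashtag, access_token):
    response = requests.get(
        API_URL,
        params={"tag": hashtag, "access_token": access_token},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("items", [])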

Privacy watchdogs contend these sites aren’t clearly communicating to users that their images could be scanned in bulk or downloaded for marketing purposes. Many users may not intend to promote, say, a pair of jeans they are wearing in a photo or a bottle of beer on the table next to them, the privacy experts say.

A screenshot of the Ditto Labs site shows the fire hose of photos that it scans for brands. The site filters photos by categories such as beer. (Ditto Labs)
“This is an area that could be ripe for commercial exploitation and predatory marketing,” said Joni Lupovitz, vice president at children’s privacy advocacy group Common Sense Media. “Just because you happen to be in a certain place or captured an image, you might not understand that could be used to build a profile of you online.”

In recent years, startups have begun mining text in tweets or social-media posts for keywords that indicate trends or sentiment toward brands. The market for image-mining is newer and potentially more invasive because photos inspire more emotions in people and are sometimes open to more interpretation than text.

Instagram, Flickr and Pinterest Inc.—among the largest photo-sharing sites—say they adequately inform users that publicly posted content might be shared with partners and take action when their rules are violated by outside developers. Photos that are marked as private by users or not shared wouldn’t be available to marketers.

There are no laws forbidding publicly available photos from being analyzed in bulk, because the images were posted by the user for anyone to see and download. The U.S. Federal Trade Commission does require that websites be transparent about how they share user data with third parties, but that rule is open to interpretation, particularly as new business models arise. Authorities have charged companies that omit the scope of their data-sharing from privacy policies with misleading consumers.

The FTC declined to comment.

The photo sites’ privacy policies, the legal documents enforced by law as promises to consumers, vary in wording, but none of them clearly conveys how third-party services treat user-posted photos.

For example, the privacy policy of Instagram, which is owned by Facebook Inc., directs its more than 200 million users to a separate document that explains rules for developers. Pinterest and Flickr, owned by Yahoo Inc., have no explicit mention of third-party developers in their privacy policies. Other popular sites for photos, including Twitter Inc. and another Yahoo-owned site, Tumblr, warn users they may share nonprivate content with third parties.

While Facebook is one of the largest photo-sharing sites, the fact that most of its users restrict their photos’ access with privacy controls has deterred outside developers from mining those images. Developers commonly use Facebook’s API to pull in profile photos of its members but not for marketing purposes.

An Instagram spokesman said its partnerships with developers don’t “change anything about who owns photos, or the protections we have in place to keep our community a safe place.” Flickr said it takes steps to prevent outside developers from scanning photos on its site in bulk.

Pinterest said “our API only provides public information to a handful of partners intended to help their clients understand the performance of their content on Pinterest.”

Spokeswomen for Tumblr and Twitter declined to comment.

Jules Polonetsky, the director of Future of Privacy Forum, an advocacy group funded by Facebook and other tech companies, said users should assume that companies are scanning sites for market research if their photos are publicly viewable.

But the boom in image-scanning technologies could lead to a world in which people’s offline behavior, caught in unsuspecting images, increasingly becomes fodder for more personalized forms of marketing, said Peter Eckersley, technology-projects director for the Electronic Frontier Foundation.

Moreover, the use of software to scan faces or objects in photos is so new that most sites don’t mention the technology in their privacy policies.

Advertisers such as Kraft Foods Group Inc. pay Ditto Labs to find their products’ logos in photos on Tumblr and Instagram. The Cambridge, Mass., company’s software can detect patterns in consumer behavior, such as which kinds of beverages people like to drink with macaroni and cheese, and whether or not they are smiling in those images. Ditto Labs places users into categories, such as “sports fans” and “foodies” based on the context of their images.
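
A toy sketch of how detected photo content could be bucketed into marketing categories like these; the rules and labels are invented for illustration and are not Ditto Labs’ actual taxonomy.

# Invented rules mapping detected photo content to marketing categories.
CATEGORY_RULES = {
    "sports fans": {"jersey", "stadium", "sports drink"},
    "foodies": {"macaroni and cheese", "latte", "plated dish"},
}

def categorize(detected_labels):
    labels = {label.lower() for label in detected_labels}
    return [cat for cat, keywords in CATEGORY_RULES.items() if labels & keywords]

print(categorize({"Stadium", "Jersey", "smiling face"}))   # ['sports fans']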

Kraft might use those insights to cross-promote certain products in stores or ads, or to better target customers online. David Rose, who founded Ditto Labs in 2012, said that one day his image-recognition software will enable consumers to “shop” their friends’ selfies. Kraft didn’t respond to a request for comment.

Ditto Labs also offers advertisers a way to target specific users based on their photos posted on Twitter, though Mr. Rose said most advertisers are reluctant to do so because users might find it “creepy.”

Mr. Rose acknowledges that most people who upload photos don’t understand they could be scanned for marketing insights. He said photo-sharing services should do more to educate users and give them finer controls over how companies like his treat photos.

Beyond image recognition, some API partners employ a process called “caching,” meaning they download photos to their own servers. One of the more common uses of caching is to build a marketing campaign around photos uploaded by users and tagged with a specific hashtag.

The companies don’t mention caching in their privacy policies and they vary in how long developers can store photos on their servers. Tumblr, for example, restricts caching to three days while Instagram says “reasonable periods.”
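
Mechanically, caching with a retention window is straightforward; the sketch below keeps downloaded photo records in memory and purges anything older than a chosen cutoff (120 days, echoing the figure Piqora later adopted). The storage and field names are assumptions.

import time

RETENTION_SECONDS = 120 * 24 * 60 * 60       # 120-day retention window (an assumption)
cache = {}                                   # photo_url -> (photo_bytes, fetched_at)

def cache_photo(url, data):
    cache[url] = (data, time.time())

def purge_expired():
    """Delete cached photos older than the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    for url in [u for u, (_, fetched_at) in cache.items() if fetched_at < cutoff]:
        del cache[url]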

Some developers have already overstepped the rules set forth by photo-sharing sites. Last month, Pinterest learned from a Wall Street Journal inquiry that Piqora, one of seven partners in its business API program, launched in May, was violating its image-use policy.

Piqora, a San Mateo, Calif., marketing analytics startup, collects photos into a graphical dashboard that helps companies such as clothing and accessories maker Fossil Inc. track which of its own products and those of competing brands are most popular. This violated Pinterest’s rules, which restrict partners from using images from the site that were posted by anyone except their own clients.

After Pinterest learned about the violation, the company asked Piqora to discontinue the practice and plans to begin performing regular audits of its business partners, a spokesman for Pinterest said. Fossil didn’t respond to a request for comment.

Piqora co-founder and Chief Executive Sharad Verma says he has removed the ability to view competitors’ images in the dashboard. He also clarified his company’s policy on photos cached from Instagram. Rather than keeping photos for an indefinite period of time, Mr. Verma said he will now delete photos from his servers within 120 days.

“We might be looking at doing away with caching and figuring out a new way to optimize our software,” Mr. Verma said.

— Lisa Fleisher contributed to this article.

Write to Douglas MacMillan at douglas.macmillan@wsj.com and Elizabeth Dwoskin at elizabeth.dwoskin@wsj.com

http://online.wsj.com/articles/smile-marketing-firms-are-mining-your-selfies-1412882222

Project Adam by Microsoft

By | Blog | No Comments

Deep learning and artificial intelligence have the potential to change everything we know about finding and visualising the circuit of images and data that flows from the real world into our systems and back out again. Everything has data associated with it and can tell a deeper story; this is still the story that the NLP and AI communities have been telling for years, with few tangible products shipped.

We do not believe Microsoft currently has the leadership of one individual in their organization with the tenacity and vision that it will take to be able to get this into the mass market.

Read more here: http://www.engadget.com/2014/07/14/microsoft-research-project-adam/?ncid=rss_truncated

Welcome to The Picture Genome Project.

By | Blog | No Comments

What?
The Picture Genome Project will develop solutions for the disbursement of profits from the Democratization of Pictures. It is also an effort to categorize pictures through the standardization of picture meta data. Our aim is to allow people to track the billions of pictures created throughout our lifetime.

Why?
The history of humanity is about exploring. Much like the Human Genome Project began decoding DNA in 1990 to understand the genetic makeup of the human species, I believe that through the categorization of pictures we will assemble a better understanding of the art, trends, and transcendence they provide.

How?
It will be based on the latest technologies and on integrations of known interactions. By bringing together a think tank of industry leaders, we will continually strive for new ways to integrate and standardize metadata in pictures. As books have their ISBNs and music has its rhythms and melodies, all great forms of art have sought out order to expand a reach and creativity that is limited only by our imagination.
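
As a concrete (and entirely speculative) illustration of standardised picture metadata, here is what a single record might look like: a unique, ISBN-like identifier plus creator, time, tag, and rights fields. Every field name is an assumption.

import json
import uuid
from dataclasses import dataclass, asdict

@dataclass
class PictureRecord:
    picture_id: str      # globally unique, ISBN-like identifier
    creator: str
    created_at: str      # ISO 8601 timestamp
    tags: list
    rights: str          # licensing / profit-disbursement terms

record = PictureRecord(
    picture_id=str(uuid.uuid4()),
    creator="Jane Photographer",
    created_at="2012-11-20T09:30:00Z",
    tags=["street", "portrait"],
    rights="CC BY-NC 4.0",
)
print(json.dumps(asdict(record), indent=2))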

Who?
We all create pictures, but this project will only move forward with the participation of leading photographers, prosumers, and business leaders.


As the Human Genome Project remains one of the largest single investigative projects in modern science, I look forward to working with tech leaders to evolve our understanding of the content we all continually create.

I look forward to your comments and participation.

Posted 20th November 2012 by Andy LeSavage
