
Secret Service to monitor social media in massive Super Bowl security operation

Posted on December 12, 2021 by Roger


The Secret Service will be combing through messages on Twitter, Facebook and other social media sites Sunday as it seeks to thwart possible threats against the Super Bowl.


It’s part of a massive security operation before the big game.

 

According to the Department of Homeland Security, the Secret Service will be conducting open source monitoring of social media for “situational awareness.”

 

Officials told NextGov that they will be using social media tracking technology as they seek to discern between real and bogus threats at the Super Bowl.

 

They’ll also provide air space security, though a department spokesman told NextGov there would be “no drones.” A spokesman for the Secret Service told the website that the agency will continuously screen social media sites, including Facebook and Al Jazeera Blog.

 

Separately, a report in SecureWorldExpo said the Federal Emergency Management Agency would be supplying roving command centers and trucks that could sustain power generation and mobile communications during a disaster at the Super Bowl.

FEMA will also provide a network of BioWatch detectors to guard against a possible biological attack.

 

Homeland Security Secretary Jeh Johnson visited the site of the Super Bowl late this week for a briefing on security efforts.

My Top 3 Link Building Strategies

Posted on November 12, 2021 by Luke


1. Content creation + social amplification - consistently producing useful, interesting stuff that has a clear audience on social media who'd help amplify it (because it serves them in some way) is a great strategy that yields ongoing returns month after month. It's a flywheel: hard to turn those first few times, but it gets easier and easier with time.

 

2. Thought leadership + bios & interviews - thought leaders in any field earn links simply through their day to day activities, e.g. speaking at events, being asked to contribute to interviews, going on podcasts, doing online hangouts, etc.

 

Establishing that thought leadership isn't easy, but nothing that's truly strategic in link building is. Don't underestimate the ambient power of thought leadership for links, though - it makes the process much easier.

 

3. Online tools, calculators, and interactive data - these types of resources earn links and citations like no other, and sometimes, all it takes is a single resource (think Statcounter's Global Stats, Zillow's price estimates, Walkscore's Walkscore, etc) to kickstart massive, on-going links.

5 Traffic Building Tips from Some of the World’s Most Popular Bloggers

Posted on May 12, 2021 by Scott


I asked my favorite popular bloggers for quick and uncommon tips for building website traffic. They really came through with some priceless wisdom and tips.

 

1) “Give your very best content away.”
- Josh Hanagarne, World’s Strongest Librarian

 

2) “Find people wherever they hang out and bring them back home. If your audience likes to use YouTube then create videos, if they are businesses then check out LinkedIn, and if they are mostly consumers then put effort into engaging them in Facebook and attract them to your blog from there.”
- Chris Garrett, Chris Garrett on New Media

 

3) “Make your content unmissable. Think ‘How could I make all of my content be viewed as something that can’t be missed?’ That may mean writing when you can’t not write. It might mean killing a lot of mediocre ideas. But it’s mostly about deliberately choosing to only publish content that makes people hungry for more.”
- Jonathan Mead, Paid to Exist

 

4) “After you hit publish on your next blog post, head on over to Google blogsearch and find other people who have written about the same thing. If it is fairly popular, there will be quite a few blogs which have covered the same subject recently, so go to their posts and join in the conversation by leaving a comment. Anyone who clicks on your comment from that site will find very relevant content, as if it is an ‘extension’ of the site they were just reading, and they’ll probably stick around for quite a while.”
- Glen Allsopp, ViperChill

 

5) “Considering that April is just around the corner, here is an uncommon tip to generate traffic: leverage April Fools’ Day. That is right, if you pull a crazy enough prank on your blog it might go viral, and the traffic will be huge. Last year I invented a service that would let Internet users download the whole Internet to their hard drives….!”
- Daniel Scocco, Daily Blog Tips

 

Data Visualization in the Modern Age

Posted on May 10, 2021 by Bill


With all the industry buzz surrounding data visualization and its current toolset, it can sometimes be easy to forget that the practice extends at least as far back as that of map-making. All maps attempt to convey complex data through a graphical medium, using design elements to influence, and hopefully expedite, the reader’s understanding. While such design challenges have presented themselves for centuries, it's been over the past few years that new technology and formats have expanded the scope and ubiquity of data visualization on the web.

JavaScript libraries such as D3.js have made it easier and more standardized than ever to create rich, interactive data graphics for the web, and users have come to expect a higher level of engagement, creativity and interactivity than ever before. As a result, the power to set the conversation, in many cases, has shifted into the hands of data scientists and data journalists. Kennedy Elliott, a visual journalist from the Washington Post, said during her OpenVis talk that, for those working in data visualization, "It's a really good time to start thinking out of the box."



It's true that there is a rapidly materializing landscape for data visualization best practices and conventions, both in terms of tools and techniques and in terms of how to responsibly and accurately convey information. One theme that came up often at the conference was the dichotomy between "exploratory" infographics and "explanatory" ones. Many of the speakers (such as Jen Christiansen, Kennedy Elliott, Lisa Strausfeld, and Christopher Cannon) come from media fields, and visualize smaller, pre-determined data sets that serve the purpose of clarifying a specific trend or point. Others (such as Facebook's Jason Sundram and Mapbox's Eric Fischer) visualize dynamic data of unpredictable scale and meaning, thus inviting the user to reach her own conclusions via exploration.

 

We're focusing on ways to empower clients to explore their data in meaningful and unbiased ways. Our clients' data is so variable and unpredictable, and each use case so unique, that we make sure to think outside the box to create the best possible visual tools. The goal here is to help our clients connect the dots between audience affinities and onsite behavior, and oftentimes the best way to do this is by providing an intuitive, visual platform that they can customize to suit their own purposes.

The New Vocabulary for Describing Big Data Dimensions

Posted on February 14, 2021 by Lyle


Twelve years ago, Doug Laney listed the three dimensions of data management in a Gartner (then Meta Group) research note: Volume, Variety, Velocity.

 

Nowadays, the evolution of Data Management also refers to Big Data.

In order to describe it, Gartner added a C to the 3Vs: Volume, Variety, Velocity, Complexity. Forrester added Variability. Is Variability like Complexity? McKinsey Global Institute added Value. Recently, Gartner has introduced 12 dimensions for Big Data, grouped into three tiers.


The first question is: do the Vs make sense? Do we need the Vs?

Paraphrasing this post, I can say that “the IT industry simply loves acronyms … As acronyms go, Vs isn’t as bad as it could be. If they help describe the big data problem, let’s go with Vs.”

 

Briefly, now we have 5 Vs: Volume, Variety, Velocity, Variability (and/or Complexity) and Value:

 

Volume: large amount of data
Variety: different data formats
Velocity: continuous data streams and record creation
Variability and/or Complexity: different meanings and complexity of data types
Value: extract meaning from information.


Value is the key V. You may argue it’s not a big data dimension, but Value refers to the reason for using big data. It refers to the business case.

 

What’s the business case? This is the first point outlined by Forrester’s Evin Welsh when asking the key questions on big data.

 

In a post by Stephen Sawyer, you can find the following quote, confirming that Value is the missing V to the classical Vs: “Big data is a lot more interesting when you bring in ‘V’ for value. Does new data enable an organization to get more value, and are we doing enough to get to that value quickly?”

 

To address the big data issue, the IT industry must handle business cases and business data coming from users’ needs, as well as deliver the right results and proper solutions by managing the various big data dimensions.

Google No Longer Accepting "multi" for the Color Field

Posted on January 21, 2021 by Elliot



Google is no longer accepting “multi” as a value for the “color” field in the shopping feed. As of now, Google is only throwing a warning, but it is possible that this warning will turn into an error in the future. In order to prevent future feed errors, we recommend that you implement a strategy to replace “multi” values.

 

What options do you have if color values are missing from your product data?

 

Using Swans Wharf’s customization tool, there’s a seemingly endless variety of ways to replace “multi” values. Below is the most common (and recommended) process for replacing product color values.

 

To add color options to your store:

Create a merge file with your unique ID field and associated color for each product

Use Swans Wharf to create customization rules in Step 4 of the feed based on another field in your data. For example, if the color appears in your product title, you can create rules that pull it from there. A separate rule would need to be created for each color (a rough sketch of the underlying idea follows below).
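
Swans Wharf's rule builder handles this replacement inside its UI, but the underlying idea is a simple lookup-and-swap against your merge file. Below is a rough, hypothetical Python sketch of that idea; the file names and column headers ("id", "color") are assumptions, not part of the Swans Wharf tool.

    import csv

    # Hypothetical file names and column headers -- adjust them to match your feed.
    FEED_FILE = "products.csv"        # your existing shopping feed
    MERGE_FILE = "color_merge.csv"    # unique ID -> replacement color
    OUTPUT_FILE = "products_fixed.csv"

    # Load the merge file into a lookup table keyed by the unique ID field.
    with open(MERGE_FILE, newline="", encoding="utf-8") as f:
        color_by_id = {row["id"]: row["color"] for row in csv.DictReader(f)}

    # Rewrite the feed, swapping any "multi" color for the merged value.
    with open(FEED_FILE, newline="", encoding="utf-8") as src, \
         open(OUTPUT_FILE, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row.get("color", "").strip().lower() == "multi":
                row["color"] = color_by_id.get(row["id"], row["color"])
            writer.writerow(row)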


Bing Set to Receive Image Data Feeds

Posted on December 12, 2020 by Elliot


From a recent Bing Blog:

Bing is able to pull data directly from your servers. We accept 2 types of feeds for simplicity:

Direct data file: Object metadata is tied to page and image URLs and presented in data files. This type may be a sitemap or a more robust data feed.


2-step data sharing: A sitemap is first shared to list all the page URLs that contain images. Bing then uses the URLs to query a specified metadata endpoint (OEmbed, RSS, etc) to retrieve comprehensive metadata.

We offer flexibility on feed and schema formats:

Format: JSON (preferred), XML
Schemas:
Bing Image (preferred)
Google Product Feed
Google Image Sitemap
Yahoo MediaRSS
Pinterest Rich Pins

The Bing Image data feed schema is based on schema.org. The more metadata provided in the data feed, the richer the experience we can enable.
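
The post doesn't spell out Bing's exact field names, but since the schema is based on schema.org, a single feed record might look roughly like this Python sketch. The property names follow schema.org's ImageObject type; treat the overall shape and the values as assumptions rather than Bing's official format.

    import json

    # One image record, sketched with schema.org ImageObject-style fields.
    # Property names follow schema.org; Bing's exact feed schema may differ.
    item = {
        "@context": "https://schema.org",
        "@type": "ImageObject",
        "contentUrl": "https://www.example.com/images/red-dress.jpg",
        "url": "https://www.example.com/products/red-dress",  # page hosting the image
        "name": "Red Summer Dress",
        "description": "Lightweight red summer dress, available in sizes S-XL.",
        "width": 1200,
        "height": 1600,
        "datePublished": "2020-12-01",
    }

    # A full feed would typically be a list of such records, serialized as JSON.
    print(json.dumps([item], indent=2))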

 

How do I start?

Please contact us to discuss more details such as scheduling, metadata richness, validation, and more.

 

As a tip, when creating a sitemap, adding an indication of importance can be very helpful. For example, a webmaster can split sitemaps by activity where heavily trafficked URLs are included in frequently updated sitemap1 while less active URLs are in a separate sitemap2. This helps Bing understand which URLs it should prioritize crawling first.
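
As a hypothetical sketch of that split, the snippet below writes two separate sitemap files, one for heavily trafficked URLs and one for quieter ones. The file names and URLs are placeholders.

    from xml.sax.saxutils import escape

    def write_sitemap(path, urls):
        # Write a minimal sitemap file for the given list of page URLs.
        with open(path, "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for url in urls:
                f.write(f"  <url><loc>{escape(url)}</loc></url>\n")
            f.write("</urlset>\n")

    # Heavily trafficked URLs go in a frequently regenerated sitemap1;
    # quieter URLs go in sitemap2 (placeholder URLs and file names).
    write_sitemap("sitemap1.xml", ["https://www.example.com/popular-gallery"])
    write_sitemap("sitemap2.xml", ["https://www.example.com/archive-2014"])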

 

We will continue to evolve our data feed ingestion system. We value your feedback on how we can make this process easier and use your content to drive high quality referral traffic. We look forward to hearing from you.

Datafeed Best Practices

Posted on August 2, 2021 by Roger


If you work as an Affiliate Manager for a retailer, you’ve probably seen the word “Datafeed” before, and may be wondering … What is that!?

Datafeeds account for a full 5-10% of all conversions tracked on the Internet… which means that if you aren’t providing your Affiliates a detail-rich datafeed with which to work, you are leaving sales on the table.

So what is a Product Datafeed?

The easiest way to explain it is that a datafeed is just a giant Excel spreadsheet with line-by-line product descriptions and data (such as price, image, category).

But why are they so important? For two main reasons:

1. They allow Affiliates to target specific products when sending you traffic. If a blog features a post about washing machines, for example, you want them to link directly to the washing machines on your retail site… not a generic home page. Conversions go up!

 

2. You get listed in a bunch of AF marketing tools that Affiliates use – namely our “Make A Page“, “Widgets“, “Videos” – and you are more easily found by potential Affiliates who search for programs based on products as opposed to Merchants.

 

Datafeeds are critical to the overall success of a retail based Affiliate Program. They allow Affiliates to market specific products or sub-sets of products with up to date and rich data.

Why do I need a datafeed?

Posted on May 2, 2020 by Bill


A Datafeed is just a line by line listing of your products. That’s it. Think of it like an Excel spreadsheet of your inventory, with columns for things like price, an image, where to find it on your site, etc…

 

In a nutshell, there are six mandatory fields for online merchants:

 

SKU - must be unique; duplicate SKUs will be removed
URL - link
Price - MSRP
Category - must correspond to the ShareASale numeric options listed below the datafeed specifications table
Subcategory - must correspond to the ShareASale numeric options listed below the datafeed specifications table
MerchantID - your unique identifier

 

Although there are only 6 mandatory columns where data must be populated, all columns must be represented in the file and in the specified order. So, even if a column allows a Null value, the column must be included in your CSV file and can be treated as blank space if you choose to not enter any data in the field.
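
As a loose illustration of that rule, the sketch below writes one product row with only the six required fields populated and every other column left blank. The optional column names and their order here are placeholders, not the actual ShareASale specification, so always check the datafeed specifications table for the real layout.

    import csv

    # Illustrative column names and order only -- the real ShareASale spec
    # defines the full set and exact order; check the specifications table.
    columns = ["SKU", "Name", "URL", "Price", "FullImage", "ThumbnailImage",
               "Category", "SubCategory", "Description", "SearchTerms",
               "Status", "MerchantID"]

    product = {
        "SKU": "ABC-1001",                            # must be unique
        "URL": "https://www.example.com/p/abc-1001",
        "Price": "49.99",
        "Category": "13",                             # numeric option from the spec
        "SubCategory": "45",                          # numeric option from the spec
        "MerchantID": "12345",
    }

    with open("datafeed.csv", "w", newline="", encoding="utf-8") as f:
        # restval="" leaves every optional column present but blank.
        writer = csv.DictWriter(f, fieldnames=columns, restval="")
        writer.writeheader()
        writer.writerow(product)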

 

Even though there are only six required fields, there are certainly an abundance of optional fields that I would strongly encourage utilizing. The more information you can provide to the Affiliates about the product, the more flexibility they have for marketing and promoting. Some of my personal preferences/favorites include:

 

Name
URL to Image
URL to Thumbnail Image
Description
Search Terms
Status
Manufacturer
Merchant Category
Bestseller

 

Once your CSV is complete, you will compress the file to a .zip or .gz archive and send it to us from the "Upload A New Datafeed" option. You also have the option to automatically upload the datafeed via FTP access. If you would like to set up FTP access, submit a “Ticket” from inside your account that includes the static IP address from which you will be uploading the feed.

There we have it! You now know the basic building blocks for building a product datafeed. It's important to note that, in many cases, the data for the feed can be exported directly from the product database you, the Merchant, may be using. That is something to look into in order to save the time and resources of building the file manually from scratch.

Making News Feed nearly 50% faster on iOS - from the Facebook Engineering Team

Posted on March 4, 2020 by Roger


From the Facebook Engineering Team -

 

Our engineering team spends a lot of time and effort making News Feed reliable, simple, and fast. Almost two years ago, we switched the Facebook iOS app from HTML5 to native iOS code to optimize performance.

 

Our work didn't stop with News Feed. We wanted to bring the same speed to other parts of the app, so in later updates we also introduced native rewrites for Timeline, Groups, Pages, Search, and more. But we began to notice something curious – News Feed was getting slower with each release. With each update it would take a tiny bit longer for News Feed to load, and it began to add up.

 

What was going on? To figure it out, we added instrumentation to each step in the process of loading News Feed — network, parsing, data processing, layout calculations, and view creation. What we found surprised us — the problem was in our data model layer. With each passing release, the time it took to create and query model objects was longer and longer. Only turning to a brand new model layer would solve the slowdown.

 

Data models on iOS
First, let’s talk about how News Feed was designed to work on iOS. The Facebook APIs we use serve a JSON representation of the stories in your News Feed. Because we didn't want UIViews to consume JSON directly — it offers no type safety or hints about what fields you can expect to get from the server — we created intermediate data models from JSON and used those to power the user interface. Like most iOS apps, we chose to use the system default framework for managing data models: Core Data.

 

Already built into iOS and very well documented, it allowed us to get the native rewrite out the door without reinventing the wheel.

Returning to our performance problems, though, we found that Core Data had a quirk. As we ported more features, our Core Data database slowed down. We started with only a few dozen entities in Core Data, but this had ballooned to hundreds. Some of those entities had a lot of fields — Pages, for example, had more than 100!

 

Under the hood, Apple’s Core Data framework uses SQLite to store data. As we added more entities and more fields, the number of tables, columns, and indexes grew. Core Data stores data in a fully normalized format, so each time we encountered a Pages object in JSON, we would have to perform a fetch-or-create in Core Data and then update the page. Saving would touch dozens of indexes in SQLite, thanks to an enormous number of relationships (i.e., how many things reference people or Pages objects on Facebook).

 

We realized that while Core Data had served us well in the beginning, we needed to go without some of its features to accommodate our scale. We set about replacing it with our own solution, resulting in News Feed performing nearly 50% faster on iOS.


A New Model Layer.
Core Data is at heart an object-relational mapper (ORM). It provides features like full normalization and synchronous consistency across multiple isolated contexts.

But since the Facebook app is essentially a cache for data that lives on the server, a completely normalized representation of data wasn't needed. All of those fetch-or-creates while parsing JSON objects were resource-intensive and unnecessary. When data is downloaded from a Facebook server, it's already as up-to-date as it can be.

We sought a system that was consistent — if someone likes a post on one screen, other screens should update accordingly — yet we balanced that by settling for asynchronous eventual consistency, rather than the synchronous consistency guaranteed by Core Data. In Objective-C parlance, we wanted the ability to "dispatch_async" the consistency operations on our object graph.

 

We developed our own bare-bones form of model objects guided by three principles:

 

Immutability.
In this new data layer, models are completely immutable after creation. To modify even a single field, a developer must create an entirely new model object. This might seem crazy at first, but since you can't modify the object, there's no need for locks; thread safety becomes trivial. This also allows us to write code in a dataflow (or "functional reactive") pattern, which we've found reduces programmer error and makes code clearer.
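
Facebook's models are Objective-C, but the principle translates to any language. As a minimal analogy in Python, a frozen dataclass behaves the same way: you cannot mutate it, so an "update" produces a brand-new object.

    from dataclasses import dataclass, replace

    # Minimal Python analogy: a frozen dataclass cannot be mutated after
    # creation, so "changing" a field means building a new object.
    @dataclass(frozen=True)
    class Story:
        story_id: str
        text: str
        like_count: int

    original = Story(story_id="s1", text="Hello feed", like_count=10)

    # original.like_count = 11   # would raise FrozenInstanceError

    # Instead, derive a new immutable model with the updated field.
    updated = replace(original, like_count=11)
    print(original.like_count, updated.like_count)  # 10 11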

 

Denormalized Storage.
To serialize these models to disk, we chose to use NSCoding. With each part of the app assigned its own cache, there is no longer contention for the single Core Data store shared by the entire app. It also ensures that products that don't want to cache to disk don't have to.


Asynchronous, Opt-In Consistency. By default, there are no consistency guarantees. By making consistency opt-in instead of opt-out, we were able to ensure that database indexes are not used in situations where consistency is unnecessary. To opt-in, a developer passes a model to a consistency controller; when it detects that a consistent field has changed inside the model, it hands the developer a new model with those updates. Behind the scenes, this consistency controller uses a GCD background queue to compute these updates, ensuring we never block the main thread.
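
As a rough analogy of that opt-in flow, sketched in Python with invented names rather than Facebook's actual API, a consistency controller might register models explicitly, compute updates on a background worker, and hand each observer a new immutable model:

    from concurrent.futures import ThreadPoolExecutor
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Story:
        story_id: str
        like_count: int

    class ConsistencyController:
        def __init__(self):
            self._executor = ThreadPoolExecutor(max_workers=1)  # background "queue"
            self._observers = []  # (model, callback) pairs that opted in

        def observe(self, model, callback):
            # Opting in: only registered models receive consistency updates.
            self._observers.append((model, callback))

        def field_changed(self, story_id, like_count):
            # Compute updated models asynchronously, never blocking the caller.
            self._executor.submit(self._apply, story_id, like_count)

        def _apply(self, story_id, like_count):
            for model, callback in self._observers:
                if model.story_id == story_id:
                    callback(replace(model, like_count=like_count))

        def shutdown(self):
            self._executor.shutdown(wait=True)

    controller = ConsistencyController()
    controller.observe(Story("s1", 10), lambda updated: print("new model:", updated))
    controller.field_changed("s1", 11)   # observer receives Story(story_id='s1', like_count=11)
    controller.shutdown()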


Taking a cue from "POJOs" in Java, we refer to these objects as "PONSOs," or plain ol' NSObjects.

 

Porting Feed.
After creating our own model objects, one major roadblock remained. News Feed had been written with the assumption that it would be rendered from a Core Data model, but now it might have only an equivalent PONSO. However, we didn't want to rewrite all of News Feed to use only PONSOs, since we wanted to A/B test the rollout of these new model objects.

 

The solution was a clever use of protocols. For each type of object, we used a script to code-gen a protocol that represented a model-agnostic interface. Both the Core Data object and the PONSOs adopted this protocol, and we migrated News Feed code bit-by-bit to use these new protocols instead of hard-coded references to Core Data classes. When the last hard-coded Core Data reference was migrated, we were ready to launch.
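
The same trick is easy to picture outside Objective-C. Here is a small Python analogy using a structural protocol: both a stand-in "Core Data" model and a stand-in PONSO satisfy the same interface, so the rendering code never needs to know which one it was handed (all names here are illustrative).

    from typing import Protocol

    class StoryModel(Protocol):
        # Model-agnostic interface, analogous to the code-generated protocols.
        story_id: str
        text: str

    class CoreDataStory:   # stands in for the legacy Core Data-backed object
        def __init__(self, story_id, text):
            self.story_id = story_id
            self.text = text

    class PONSOStory:      # stands in for the new immutable "PONSO"-style model
        def __init__(self, story_id, text):
            self.story_id = story_id
            self.text = text

    def render(story: StoryModel) -> str:
        # Feed code depends only on the protocol, so either implementation
        # can be swapped in during an A/B test.
        return f"[{story.story_id}] {story.text}"

    print(render(CoreDataStory("s1", "old model")))
    print(render(PONSOStory("s2", "new model")))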

 

The Launch.
We used our Airlock system to gradually introduce this new version of News Feed to the public. It helped us verify that iOS News Feed was nearly 50% faster with the new model layer. Our work won't stop there, of course—we've got more improvements coming, so look forward to an even snappier News Feed soon!



What are web feeds?

Posted on November 1, 2019 by Bill


You may be wondering, “What are web feeds?” Put simply, web feeds allow you to display dynamic content from a webpage directly in your emails. So, if you have newsletters that link to areas of your site that are frequently updated, web feeds eliminate the need to manually update your email’s content, too. They’re a great tool, so we want to give you a little bit more information on how to use them.

 

What do web feeds do?

Anyone who consistently sends campaigns that are meant to reflect updates online can benefit from using web feeds. Instead of cloning campaigns and manually updating the content for each newsletter, web feeds allow you to simply clone the campaigns and send them as is – the content will update automatically, so long as you’ve updated the webpage the feed is linked to. This can make sending newsletters or other regularly sent campaigns much faster and easier.

How do web feeds work?

 

Web feeds fetch data from the URL you specify, so they’re updated in real-time and content can vary depending on when a recipient opens the email. Web feeds fetch JSON or XML data, and then automatically display the linked content in your emails. We recommend using JSON format, since Klaviyo will convert XML data to JSON before use anyway.

 

One thing to bear in mind when creating a web feed is that the URL you plug in must be in JSON or XML format, otherwise the feed will not display properly. When selecting the request method, you can choose from “post” or “get.” When you choose “post,” you’re requesting that the URL post the form data you submit. When you choose “get,” you’re requesting data directly from an HTTP resource. We strongly recommend using the “get” method.


Once you’ve configured your web feed in the Data Feeds tab, you can start inserting the feed into your campaigns. To add a web feed to your email, open the content editor, click the “Data Feeds” button at the bottom of the page, and select the web feed you would like to include. You can then call out your feed using the {{ feeds }} tag.

 

Why use web feeds?

Web feeds are super useful for ecommerce stores that update their websites daily, or have a specific webpage dedicated to flash sales. While use cases may vary by industry, web feeds are applicable to a wide range of purposes. For example, the food delivery service Munchery uses web feeds to reflect daily changes in menu options, and organizes their web feeds by day of the week and meal (lunch vs. dinner). And, since they operate in different cities across the US, their customers receive different meal options based on their locations.

 

In your web feeds, you can include as much or as little information as you’d like, so long as it corresponds with information on your site. You can also use up to five different feeds per email, which gives you a lot of flexibility when it comes to what external content you’d like to include.

 

Say you’re a retail store that has daily flash sales, and you’d like to display several items included in your sale in your newsletter. You can set your web feed to include the name of each item, the price, an image, and a short description in an ordered list. Or, you can choose to only display the name of the items and an image – it’s completely customizable per your preferences. Instead of reworking flash sale campaigns each day, now you can just clone them and they’ll be ready to send.
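
For instance, the JSON behind such a flash-sale feed might be shaped like the sketch below. The field names are entirely up to you (these are just placeholders), as long as your email template references the same keys.

    import json

    # One possible shape for a flash-sale web feed; the keys are placeholders.
    flash_sale = {
        "sale_name": "Daily Flash Sale",
        "items": [
            {
                "name": "Canvas Tote Bag",
                "price": "19.99",
                "image": "https://www.example.com/images/tote.jpg",
                "description": "Roomy everyday tote in washed canvas.",
            },
            {
                "name": "Enamel Mug",
                "price": "9.99",
                "image": "https://www.example.com/images/mug.jpg",
                "description": "12 oz enamel camping mug.",
            },
        ],
    }

    # Serve or export this JSON at the URL your web feed points to.
    print(json.dumps(flash_sale, indent=2))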


Web feeds require a bit more tech savvy than web elements, but if you find yourself constantly sending campaigns that correspond with updates to your website, they will make this process much easier. While you may need to initially invest more time to get them set up, they can save you a ton of time in the long run.

Where to find Open Data Feeds

Posted on September 18, 2019 by Emil


Finding an interesting data set and a story it tells can be the most difficult part of producing an infographic or data visualization.

 

Data visualization is the end artifact, but it involves multiple steps – finding reliable data, getting the data into the right format, cleaning it up (a step that often takes far more time than expected!) and then finding the story you will eventually visualize.

 

Following is a list of useful resources for finding data. Your needs will vary from one project to another, but this list is a great place to start — and bookmark.

 

1. Government and political data

Data.gov: This is the go-to resource for government-related data. It claims to have up to 400,000 data sets, both raw and geospatial, in a variety of formats.

 

The only caveat in using the data sets is you have to make sure you clean them, since many have missing values and characters.

 

Socrata is another good place to explore government-related data. One great thing about Socrata is they have some visualization tools that make exploring the data easier.

 

City-specific government data: Some cities have their own data portals set up for browsing city-related data. For example, at San Francisco Data you can browse through everything from crime statistics to parking spots available in the city.

 

The UN and UN-related sites like UNICEF and the World Health Organization are rich with all kinds of data, from mortality rates to world hunger statistics.

 

The Census Bureau houses a ton of information about our lives around income, race, education, population and business.

 

2. Data aggregators

These are the places that house data from all kinds of sources. Sometimes it’s easier to find something here related to a specific category.

 

Programmable Web: A really useful resource for exploring APIs and also mashups of different APIs.

Infochimps has a data marketplace that offers thousands of public and proprietary data sets for download and API access, in a wide range of categories, from historical Twitter and OK Cupid data to geolocation data, in different formats. You can even upload your own data if you like.

 

Data Market is a good place to explore data related to economics, healthcare, food and agriculture, and the automotive industry.

 

Google Public Data Explorer houses a lot of data from world development indicators, the OECD and human development indicators, mostly related to economic data about the world.

 

Junar is a great data scraping service that also houses data feeds.

 

Buzzdata is a social data sharing service that allows you to upload your own data and connect and follow others who are uploading their own data.

 

3. Social data

Usually, the best place to get social data for an API is the site itself: Instagram, GetGlue, Foursquare; pretty much all social media sites have their own APIs. Here are more details on the most popular ones.

 

Twitter: Access to the Twitter API for historical uses is fairly limited, to 3200 tweets. For more, check out PeopleBrowsr, Gnip (also offers historical access to the WP Automattic data feed), DataSift, Infochimps, Topsy.

Foursquare: They have their own API and you can get it through Infochimps, as well.

Facebook: The Facebook graph API is the best resource for Facebook.

Face.com: A great tool for facial recognition data.

 

4. Weather data

Wunderground has detailed weather information and also lets you search historical data by zip code or city. It gives temperature, wind, precipitation and hourly observations for that day.

 

Weatherbase has detailed weather stats on temperature, rain and humidity of nearly 27,000 cities.

 

5. Sports data

These three sites have comprehensive information on teams, players, coaches and leaders by season.

Football

Baseball

Basketball

ESPN recently came up with its own API, too. You have to be a partner to get access to their data.

 

6. Universities and research

Searching the work of academics who specialize in a particular area is always a great place to find some interesting data.

 

If you come across specific data that you would like to use, say, in a research paper, the best way to go is to contact the professor directly. (That is how we got the data for our What are the Odds piece, which is one of the most-viewed infographics on the web.)

 

One university that makes some of the datasets used in its courses publicly available is UCLA.

 

7. News data

The New York Times has a great API and a really good explorer for accessing any article in the publication. The data is returned in JSON format.
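
As a quick sketch of what querying it looks like, the snippet below hits the Article Search endpoint and prints each headline. It assumes you have registered for a free API key, and the exact endpoint and response fields may have changed since this was written.

    import json
    import urllib.parse
    import urllib.request

    # Querying the NYT Article Search API (requires a free key from
    # developer.nytimes.com; endpoint and parameters may have changed).
    API_KEY = "YOUR_API_KEY"  # placeholder
    params = urllib.parse.urlencode({"q": "data visualization", "api-key": API_KEY})
    url = "https://api.nytimes.com/svc/search/v2/articlesearch.json?" + params

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # Print the headline of each article in the first page of results.
    for doc in data["response"]["docs"]:
        print(doc["headline"]["main"])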

 

The Guardian Data Blog regularly posts visualizations and makes data available through a Google Docs format. The great thing about this is that the data has already been cleaned.

Update on Data.gov

Posted on December 12, 2018 by Roger


The United States is the only western country without a centralized data office. Instead, official statistics are produced by well over 100 agencies. This makes obtaining official US data difficult, and that’s somewhat of a paradox because in most cases, these data are public and free. Of course, with data coming from so many sources, they are also in a variety of shapes and sizes. Says Wired,

Until now, the US government’s default position has been: If you can’t keep data secret, at least hide it on one of 24,000 federal Web sites, preferably in an incompatible or obsolete format.

 

A commitment made by the Obama administration was to tackle this and make data more widely available. To that end, a data portal was announced in early April, and data.gov was officially launched at the end of May.

 

Data.gov is three things in one.

A sign that this administration wants to make the data more accessible, especially to developers.

A shift towards open formats, such as XML.

A catalogue of datasets published by US government agencies.

 

The rationale is that with data.gov, data are available to wider audiences. There’s a fallacy in that, because the layperson cannot do much with an ESRI file. But hopefully, someone who can will build something out of it for the good of the community.

The aspect I found most interesting is the catalogue proper. For each indexed dataset, data.gov builds an abstract, inspired by the Dublin-Core Metadata Initiative, with fields such as authoring agency, keywords, units, and the like. This, in itself, is not a technological breakthrough but imagine if all the datasets produced by all the agencies were described in such a uniform fashion. Then, retrieving data would be a breeze.

 

Note that data.gov does not store the datasets. They provide a store-front which then redirects users to the proper location once a dataset has been selected.

 

There have been other, similar initiatives. Fedstats.gov, allegedly, provided a link to every statistical item produced by the federal government. By their own admission, the home page was last updated in 2007, and its overall design hasn’t changed much since its launch by the Clinton administration in 1997 (a laudable effort at the time). Another initiative, http://usgovxml.com, is a private portal to all data available in XML format.

 

So, back to “find > access > process > present > share”. Where does data.gov fall?

 

It can come as a surprise that they don’t touch the last 3 steps. Well, it certainly will be a surprise for anyone expecting the government to open a user-centric, one-stop-shop for data. Data.gov is certainly not a destination website for lay audiences.

It doesn’t host the data either; however, its existence drives agencies to publish their datasets in compliance with its standards. So we can say that it indirectly addresses access.

 

So what it really is about is finding data. Currently, the site has two services to direct users to a dataset: a search engine and a catalogue. The browsable catalogue has only one layer of hierarchy, and while this is fine with their initial volume (47 datasets, around 200 as of end of June) that won’t suffice if their ambition is to host 100,000 federal data feeds.

 

All in all, it could be argued that data.gov doesn’t do much by itself. But what is interesting is what it enables others to do.

In the longer term, it will drive all agencies to publish their data under one publication standard. And if you have 100,000 datasets published under that standard, and if people use it to find them, then we will have a de facto industry standard for describing data. The consequences of that cannot be overestimated.

 

The other, less obvious long-term advantage is what it will allow developers to create. There are virtually no technical barriers to creating interesting applications on top of these datasets. Chances are that some of these applications could change our daily lives. And they will be invented not by the government, but by individuals, researchers or entrepreneurs. Quite something to look forward to.

Data Feed Marketing

Posted on October 22, 2018 by Bill


Data feeds are designed to be used by online marketing agencies and online shops in general. One of the most frequently cited benefits of using data feeds as a form of advertising is the ability to reach a massive number of online shoppers. We live in times where the internet is widely used because it provides many conveniences, such as shopping for all kinds of products without getting up from the couch. For this reason, the online shoppers’ community is continuously growing, which means that your customer base is growing as well.

 

Another advantage of using data feeds is that you can tailor and position your products for the customer segment you want to target. This makes advertising much more efficient and effective. A further advantage is that you can select the way you want to export your products, which gives you additional flexibility.

Last but not least, having a data feed provides you with quite a few different online advertising options. You can export your products to many different channels, which will increase traffic to your website and, consequently, sales. Some of the different channels that you can export to include:

 

Comparison shopping engines like Google Shopping, Pricegrabber, Pricerunner, Beslist and many more. These websites are a great way to give exposure to your products. There is a lot of traffic going through them and it’s almost a guarantee you will increase your sales.

 

Marketplaces such as Amazon, eBay, or Bol.com (in the Netherlands). These websites usually have huge amounts of traffic but also millions of different products.

 

Affiliate platforms, which are an indirect way to advertise. Some of these platforms, such as Daisycon, Zanox and TradeTracker, also manage your PPC (Pay Per Click). PPC is a payment method some platforms use; it means you pay every time someone clicks on your advertisement.

 

Social media websites such as Facebook. This is a great way to create tailored advertisements that reach a specific target group.

AdWords: using this method, you can make sure that your ads are displayed only when customers are searching for your products. This way you can eliminate unnecessary clicks and increase your conversion rate.

Super Simple Way to Monetize Your Website

Posted on August 12, 2018 by Roger


Website monetization is a task that every website owner should be focusing on. As times change, it is critical to stay up to date on the various methods you can use to increase site earnings. One of the best ways to make money with a site is to sell products for other people. There are numerous ways to do this, but the best, in my opinion, is to use a datafeed provided by a retailer, which allows you to include their products directly on your site with limited work on your part.

 

What is a datafeed?
A merchant datafeed is a file that contains all the information for products a merchant wants to make available to affiliates in order for them to sell their products. The file format has been standardized so implementation is consistent and easy to maintain.


How to use a datafeed.

Datafeeds need to be processed in order to display the included products correctly; a datafeed will typically contain product information such as descriptions, images, options and prices. There are manual ways to use this data on any site, but for ease of use and efficiency I recommend using a datafeed plugin so all the heavy lifting is done for you (a minimal sketch of the manual approach is shown below). By setting up your WordPress-based site with a datafeed plugin, you can install, set up and populate a store section on your site very easily and maintain it effectively without needing a degree to figure out how to manage it. My personal favorite is called DataWedge.
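
For context, here is what the "manual way" boils down to, as a minimal Python sketch with assumed column names; a plugin like DataWedge automates all of this, plus the templating, caching and link handling.

    import csv

    # Minimal sketch of "processing" a merchant datafeed by hand.
    # Column names are assumptions; match them to the feed you actually receive.
    with open("merchant_datafeed.csv", newline="", encoding="utf-8") as f:
        for product in csv.DictReader(f):
            # Build a simple affiliate product listing entry.
            print(f"{product['name']} - {product['price']}")
            print(f"  image: {product['image_url']}")
            print(f"  buy:   {product['affiliate_url']}")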

 

What is DataWedge?
DataWedge is a plugin that allows you to easily create and embed an affiliate datafeed store in your WordPress blog. DataWedge is a powerful, push-button store creation system that makes it fast and easy for affiliate marketers to set up, manage and update a complete affiliate datafeed store without touching any confusing or complicated data feed files, learning how to program PHP, or hiring an expensive programmer to do the work for you.


Why use DataWedge?
DataWedge works with WordPress, the most popular content management system in use today, and is designed with ease of use in mind, allowing even a low-budget site to have a professional and effective data feed-powered store. DataWedge currently aggregates over 130 million affiliate products from 19 different affiliate networks including Commission Junction, Clickbank, LinkShare, Google Affiliate Network, Pepperjam Network, Digital River, ShareASale, Buyat and others.
DataWedge key features:
No sign up fees
No data feed access fees.
Keep 100% of your commissions.
Works with almost any WordPress theme.
Create unlimited affiliate data feed stores.
Automatically create blog posts from your product list (drip functionality)
Embed products into blog posts or pages.
Pick products from multiple merchants and multiple networks.
Include up to 100,000 products per store.
Built-in breadcrumb menu for easy navigation.
Search engine friendly URLs.
Create new pages based on keywords.
Choose what to display on each page (product name, thumbnail, description, price, etc…)
Edit category names, images, descriptions and thumbnails.
Choose to redirect your affiliate links immediately to merchant or first to a local product information page.
Show similar products on product details pages.
Display blog or store on your site’s front page.
This is just a short list of some of the key features and benefits; for a complete listing, check out this great data feed store plugin here!