Structured Data: SEO Mythbusting – What Google Wants

Structured data formats are rules that standardize the structure and content of a webpage.

Google is asking all of us to surface structured data to its crawlers by marking up our HTML with formats such as JSON-LD, Microdata, or RDFa.

Google’s John Mueller has made it clear that Google prefers JSON-LD structured data.

Wow, unless you are super technical this is all mumbo jumbo – “schema markup” and “structured data”, WTH…

It sounds and looks complicated, but it is something anyone can learn to do.

So what does this mean and why should you care?

Basically, Google wants this, and if you want your site to rank somewhere inside the first 10 pages, you had better do what Google wants.

After all, just adding this markup can give your site a significant SEO boost and improve its rankings.

Most people simply put human-readable data on their site – this looks great, but it makes the content harder for Google to find and crawl.

This markup makes it easier for Google to know what the page is about without guessing.

Check out this example direct from Google. Say, for instance, you have a recipe page. The markup you could use is:


    <html>
      <head>
        <title>Party Coffee Cake</title>
        <script type="application/ld+json">
        {
          "@context": "https://schema.org/",
          "@type": "Recipe",
          "name": "Party Coffee Cake",
          "author": {
            "@type": "Person",
            "name": "Mary Stone"
          },
          "datePublished": "2018-03-10",
          "description": "This coffee cake is awesome and perfect for parties.",
          "prepTime": "PT20M"
        }
        </script>
      </head>
      <body>
        <h2>Party coffee cake recipe</h2>
        <p>This coffee cake is awesome and perfect for parties.</p>
      </body>
    </html>

Note that prepTime uses the ISO 8601 duration format, so PT20M means 20 minutes.
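It is easy to break markup like this with a stray comma or missing brace, so it is worth sanity-checking it before deploying. Here is a minimal sketch, using only Python's standard library, that pulls JSON-LD out of a page and confirms it parses; the sample page below is a trimmed-down, hypothetical version of the recipe example above.

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects and parses <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            # json.loads raises ValueError here if the markup is malformed.
            self.blocks.append(json.loads("".join(self._buf)))

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

page = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org/", "@type": "Recipe",
 "name": "Party Coffee Cake",
 "author": {"@type": "Person", "name": "Mary Stone"}}
</script>
</head><body><h2>Party coffee cake recipe</h2></body></html>
"""

extractor = JsonLdExtractor()
extractor.feed(page)
recipe = extractor.blocks[0]
print(recipe["@type"], "-", recipe["name"])  # Recipe - Party Coffee Cake
```

If the JSON inside the script tag is invalid, `json.loads` throws immediately, which is exactly the kind of "one tiny thing messed up" that used to silently make rich results disappear.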



This looks complicated, so to help you out Google has created the Structured Data Markup Helper, which makes it easier for webmasters to add schema markup to their sites.



SEO changes rapidly: what ranked a site quickly one day may not work the next, especially if blackhat methods were used.

Google recognizes this, so it has put together a channel to help webmasters find out what Google wants.

One of these channels is called SEO Mythbusting. Below is a video from this series, and under the video is its transcript so you can follow along if needed.
In this bonus material from the filming of last week’s episode (Googlebot: SEO Mythbusting), Martin Splitt (WebMaster Trends Analyst, Google) and his guest Suz Hinton (Cloud Developer Advocate, Microsoft) dive into the topic of “new microformats”: structured data!
Documentation mentioned in this episode:
Intro to structured data →
Overview of supported structured data in Google Search →
Structured data testing tool →
Rich Results Test →
Rich result status reports →



is one term that I’m

going to mention
to you just based

on this is the reason why
I had to submit the URL

to be re-indexed.

And that’s microformats.


All right.

we talk about–

are they still a thing?

I haven’t really had to do
a lot of SEO optimization

for a while.

And I knew microformats
was such a huge thing

because let’s say you’ve
got a product page

and it has reviews
on it and you want

to show the little stars and all
of that kind of rich content.

And every time I made a
tweak and we deployed,

I would have to then submit
to get re-crawled and see

if the results got richer.

And that was definitely a
very slow feedback cycle.


what is the state?

Is microformat still a thing?

And are there better resources
out there right now for us

to be able to pull
that rich content out?

going to be very happy.

And we have much better things.


are still a thing.

But they are now
called structured data.

SUZ HINTON: Structured data.

are using JSON-LD, so

JSON for Linked Data.

SUZ HINTON: Yeah, this
is all new terms to me.


And you probably used
literally the microdata

attributes in HTML.


Yep, exactly.

Yeah, we were using them.

And they were very hit and miss.


SUZ HINTON: It was very easy
to just mess up one tiny thing.

And the validator
didn’t catch it.

And then the stars
would disappear.

And we’d be like [GASP].

have moved on from there.

SUZ HINTON: OK, that’s good.

MARTIN SPLITT: So there is now an open source

organization where people can
submit or discuss or change

or do stuff with
the semantic data

that they want to
put on the web.


that’s participating–

there is much more
semantic data out there

than we are supporting
in search results.

But a bunch of it is supported
in the search results.

So for instance, if you
have an event that we want

to have showing up
with the location

and if you can get
tickets and who

is the performer and
all that kind of stuff–

if you have a recipe
where you might

have an image or the
instructions on how to make it

or the time it takes
to make it and reviews,

how nice this recipe might
be, articles, books, and TV

series, all sorts of things,
we have documentation on that

specifically as well.

If you go to the documentation,

you find all the
supported types.

And they show up nicely
in the search results.

So you get a little
preview picture.

And then you get the stars
and all that kind of stuff.

SUZ HINTON: Oh, this
would have been amazing.

MARTIN SPLITT: It’s fantastic.

And it’s JSON.

is so much easier.

script tags with JSON in it.

It’s so much easier.

SUZ HINTON: It’s just not
little meta attribute things?

MARTIN SPLITT: Correct, yes.

So you have your JSON block.

And we have what’s called the
Structured Data Testing Tool.

That is a little dated by now.

But it supports– generally,
basically everything

that we know of shows up
as either valid or invalid.

And then we have the
Rich Results Test,

because the Structured Data
Test, while being very generic,

is also not very specific
to what you want to achieve.

You want to probably achieve
the nice little stars showing up

in the search results.

This is what we
call rich results.

And there’s the Rich
Results Test for it.

And that even
gives you a preview

of how that might look
like in the search results.

There’s no guarantee
that it does

look like that in
the search results

because people have been
using it to spam stuff, like–

SUZ HINTON: Yeah, true.

a bazillion reviews.

And then we’re
like, yeah, you just

have some JavaScript
generating fake reviews.

That’s not really–

SUZ HINTON: Well, how do
you actually use the tool?

Because I remember you used to
have to dump your entire HTML

file in there.

[? don’t. ?] [INAUDIBLE]

SUZ HINTON: And if you
did it too many times,

you got timed out.



doesn’t happen anymore.


That’s pretty exciting.

you have two options.

You can dump a URL
in it, which is nice.

And you can even use ngrok or
something if you have a local–

could do local host?


this is very fancy.

you can even also

still do like you dump
your HTML in there.

We execute the JavaScript.

So if you’re using JavaScript
within that code dump,

that’s fine.

SUZ HINTON: Oh, wonderful.

you’re running it– yes.

And you can basically
live debug as you type.

You press a button and
it goes like, nope.

And you’re like, oh, damn it.

And you get the feedback here.

And it’s like, missing
performer for your event.

And I’m like, OK, sorry, sorry.

And you write it in.

And then it reruns it.

And you’re like, OK, cool.

This is what I want.

And I can take it
back to [INAUDIBLE]

SUZ HINTON: That is awesome.

yeah, we have that tool.

We have Search
Console that gives you

a live view of what
happens on your page,

also for structured data.

Yeah, microdata is not
that much of a thing.

But the structured data
is still going strong.

SUZ HINTON: Well, it sounds
like it’s come a long way.

That’s very exciting.


SUZ HINTON: If I’m ever
working for a large retailer

ever again, then I
feel like I got this.

MARTIN SPLITT: If you have a
blog, add the article markup.

You might get [INAUDIBLE]


I’m going to look at
the schema for that.

That would be like
author and stuff.

MARTIN SPLITT: And other sources
might pull the data as well,


It’s an open source format.

So theoretically,
voice assistance

could use it as well.

So just imagine if
you have a recipe blog

and then you stand
in the kitchen,

go like, hey, assistant
thing– whatever

it is, whatever company
you’re choosing.

There’s a variety of
options these days, right?

And then the thing goes like
yeah, Martin’s apple pie.

First step– take some
apples and peel them.

And you’re like oh,
OK, fair enough.

That can come from the
structured data as well.

So that’s pretty cool.

SUZ HINTON: That is really cool.

I didn’t even think
of those use cases.

I just always thought
about search results.




Googlebot: SEO Mythbusting – What Google Wants

Google’s main crawler is called Googlebot.

Googlebot retrieves the content of webpages (the words, code and resources that make up the webpage).

It then sends the information to Google.

Google uses this information in its Google search engine to determine what sites to display and to whom.

  • There are more than 3.5 billion Google searches every day
  • 76% of all global searches take place on Google
  • Google Search Index contains more than 100,000,000 GB
  • More than 60% of Google searches come from mobile devices
  • 16-20% of all annual Google search results are new
Google has put together a channel called SEO Mythbusting. This helps webmasters find out What Google Wants.

Martin Splitt (WebMaster Trends Analyst, Google) and his guest Suz Hinton (Cloud Developer Advocate, Microsoft) discuss the many intricacies of Googlebot such as:

What is – and what is not – Googlebot (crawling, indexing, ranking) (1:02)
Does Googlebot behave like a web browser? (3:33)
How often does Googlebot crawl, how much does it crawl, and how much can a server bear?
Crawlers & JavaScript-based websites
How do you tell that it’s Googlebot visiting your site?
The difference between mobile-first indexing and mobile friendliness
Quality indicators for ranking

Below are the subtitles for the video.





SUZ HINTON: A lot of
confusion revolves around SEO

because no one understands how
the Googlebot actually works.



welcome to another episode

of “SEO Mythbusting.”

With me today is Suz
Hinton from Microsoft.

Suz, what do you do at work,
and what is your experience

with front end SEO?

so right now, I’m

doing less front end these days.

I focus more on IoT.

time you were a front end


SUZ HINTON: Yeah, I was a front
end developer for, I think,

12 or 13 years.

And so I got to work on lots of
different contexts of front end

development, different web
sites, things like that.


I wanted to just

address a bunch of stuff
about Googlebot specifically,

and nerd out about
Googlebot, because that

was the side of things that
I was the most confused about

at the time.

is basically a program

that we run that
does three things.

The first thing is it
crawls, then it indexes,

and then last, but
not least, there’s

another thing that is not
really Googlebot anymore.

That is the ranking bit.

So we have to basically grab
the content from the internet,

and then we have to figure out
what is this content about?

What is the stuff that
we can put out to users

looking for these things?

And then last, but
not least, is which

of the many things that
we picked for the index

is the best thing for
this particular query

in this particular time?

SUZ HINTON: Got it, yeah.

ranking bit, the last bit,

where we move things around–
that is informed by Googlebot,

but it’s not part of Googlebot.

because there’s

this bit in the
middle, the indexing?

The Googlebot is
responsible for the indexing

and making sure that content is
useful for the ranking engine


Absolutely, absolutely.

You can imagine, someone
has to– in the library,

someone has to figure out
what the books are about

and get the index of the bits
in a catalog, the catalog

being our index, really.

And then someone else
is using that index

to make informed
decisions and going, here,

this book is what
you’re looking for.

really glad you used

that analogy because I worked
in a library for four years.

MARTIN SPLITT: So you know much
better than I how that works.

was that person.

People would be like, I
want Italian cookbooks,

and I’m like, well,
it’s 641.5495.

And you would just
give it to them.

come to you, as a librarian,

and ask a very
specific question,

like so what is the best book on
making apple pies really quick,

would you be able to figure
out, from the index–

you probably have
lots of cookbooks.

SUZ HINTON: We did, yeah.

We had a lot.

But given that I also put lots
of books back on the shelf,

I knew which ones were popular.

I’ve no idea if we can link
this back to Googlebot.


Yeah, it’s pretty much– so you
have the index that probably

doesn’t really change that much,
unless you add new books to it.

SUZ HINTON: New editions.

MARTIN SPLITT: Exactly, yeah.

So you have this index, which
Googlebot provides you with.

But then we have the second–

the librarian second
part that basically is,

based on how the interactions
with the index work,

figure out which
books to recommend

to someone asking for it.

So that’s pretty much
the exact same thing.

Someone figures out what
goes into the catalog,

and then someone uses it.

SUZ HINTON: I love this.

This makes total sense to me.

that’s still not necessarily

all the answers you need.

SUZ HINTON: Yeah, I just want to
know, what does it actually do?

How often does it crawl sites?

What does it do
when it gets there?

What does it– how is it
generally behaving like?

Does it behave
like a web browser?

a really good question.

Generally speaking, it behaves
a little bit like a browser–

at least, part of it does.

So the very first
step, the crawling bit,

is pretty much a browser
coming to your page,

either because we
found a link somewhere,

or you submitted a
site map, or there’s

something else that basically
fit that into our systems.

You can use Search Console
to give us a hint and ask

for re-indexing, and that
triggers a crawl before–

done that before.

MARTIN SPLITT: Oh, very good.

SUZ HINTON: We asked
for it to be done.

that is perfectly fine,

but the problem then,
obviously, is how often do you

crawl things, and how
much do you have to crawl,

and how much can
the server bear.

If you’re on the
backend side, you

know that you have
a bunch of load,

and that might not be
always the same thing.

If it’s like a Black
Friday, then the load

is probably higher
than on any other day.

So what Googlebot does is
it tries to figure out,

from what we have in
the index already,

is that something
that looks like we

need to check it more often?

Does that probably change?

Is it like a newspaper
or something?

SUZ HINTON: Got it, yeah.

is that something

like a retail site that
does have offerings that

change every couple of weeks?

Or even do not change at
all because this is actually

the site of a museum
that changes very rarely?

For the exhibitions maybe,
but a few bits and pieces

don’t change that much.

So we try to like segregate
our index data into something

that we call daily or
fresh, and that gets

called relatively frequently.

And then it becomes less and
less frequent as we discover,

and if it’s something that is
super spammy or super broken,

we might not crawl it as often.

Or if you specifically
tell us, do not index this,

do not put this
in the index, this

is something that I
don’t want to show up

in the search results,
and we don’t come back

every day and check.

So you might want to
use the re-index feature

if that changes.

You might have a page that you
go, no, this shouldn’t be here,

and then once it
has to be there,

you want to make sure that we
are coming back and indexing


So that’s the browser bit.

That’s the crawler part, but
then a whole slew of stuff

happens in between
that happening,

us fetching the content
from your server,

and the index having
the data that is then

being served and ranked.

So the first thing is
we have to make sure

that we discover if you have any
other resources on your page.

The crawling cycle
is very important.

So what we do is, the moment
we have some HTML from you,

we check if we have
any links in there,

or images for that
matter, or video

something that we
want to crawl as well,

and that feeds right back
into the crawling mechanism.

Now, if you have a
gigantic retail site,

let’s say, just
hypothetically speaking,

we can’t just crawl
all the pages at once,

both for our
resource constraints,

but also we don’t want to
overwhelm your service.

So we basically
try to figure out

how much strain we can
put on your service

and how much resources
we’ve got available as well,

and that’s called the
crawl budget, oftentimes.

But it’s pretty tricky to
determine, so one thing

that we do is we
crawl a little bit,

and then basically ramp it up.

And when we start
seeing errors, we

ramp it down a little bit more.

So oops, sorry, for that,
we are not– oh, ugh.

So whenever your service
serves us 500 errors,

there are certain tools
in Search Console that

allow you to say, hey, can you
maybe chill out a little bit.

But generally, we don’t try
to get all of it at once

and then ramp down.

We are trying to carefully ramp
up, ramp down again, ramp up

again, ramp down again, so
it fluctuates a little bit.
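This ramp-up/ramp-down behaviour resembles the additive-increase/multiplicative-decrease pattern used in congestion control. Google does not publish its actual crawl-budget algorithm, so the following is only a toy sketch of the idea Martin describes: probe a little higher while the server is healthy, and back off sharply when 5xx errors appear. All names and numbers here are made up for illustration.

```python
def next_crawl_rate(current_rate, saw_server_errors,
                    step=10, backoff=0.5, floor=1, ceiling=1000):
    """Toy additive-increase / multiplicative-decrease rate controller.

    current_rate      -- requests per minute used in the last window
    saw_server_errors -- True if the site returned 5xx errors in that window
    """
    if saw_server_errors:
        # Back off sharply so the server can recover.
        return max(floor, int(current_rate * backoff))
    # Otherwise probe a little higher, capped by an overall budget ceiling.
    return min(ceiling, current_rate + step)

rate = 100
rate = next_crawl_rate(rate, saw_server_errors=False)  # 110
rate = next_crawl_rate(rate, saw_server_errors=True)   # 55
```

Run over many windows, a controller like this fluctuates around whatever rate the server can comfortably bear, which matches the "ramp up, ramp down again" description above.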

SUZ HINTON: There’s a
lot more detail in there

than I was even expecting.

I didn’t even know that–

I guess I never considered
that a Googlebot crawling

event could put strain
on somebody’s website.

That sounds like it’s a
lot more common than I even

thought it would be.

happen, especially

if we discover, say,
a page that has lots

of links to subpages.

Then all of these go
into the crawling queue,

and then you might–

let’s say you have 30
different categories of stuff,

and each of these have a few
thousand products and then

a few thousand
pages of products.

So we might go, oh, cool, crawl,
crawl, crawl, crawl, crawl,

crawl, crawl, and then we
might crawl a few hundred

thousand pages.

And if we don’t spread
that out a little bit–

so it’s a weird balance.

On one hand, if you
add a new product,

you want that to be surfaced
and searched as quickly

as possible.

On the other hand,
you don’t want

us to take all the bandwidth
that your server offers.

I mean, cloud computing makes
that a little less scary,

I guess, but I
remember the days–

I’m not sure if you
remember the days where

you had to call someone,
and they ask you

to send a form or fax a form.

And then two weeks later, you
get the confirmation letter

that your server
has been started.

remember the days

when we would have to call,
and then we would basically

pay $200 to have a
human go down the aisles

and push the physical reset
button on the server, so yeah.

MARTIN SPLITT: Those times
were a lot trickier, yeah.

And then imagine you basically
renting five servers somewhere

in a data center, and
that taking a week,

and then we come and scoop
up all your bandwidth.

And you’re like, great,
we’re offline today

because Google
has its crawl day.

That’s not what we want to have.

these days, it’s

more like a happy news kind
of moment, when you get hit.


feel like you’re

much more considerate than–

MARTIN SPLITT: Yeah, we try
to not overwhelm anyone,

and we respect the robots.txt.

So that works within
the crawl step as well.
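The robots.txt rules that Googlebot respects can be checked with Python's standard library. A quick sketch, using a hypothetical robots.txt rather than any real site's rules:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: Googlebot is kept out of the checkout
# flow, and all other bots are blocked entirely.
robots_txt = """\
User-agent: Googlebot
Disallow: /checkout/

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot may fetch product pages but not the checkout flow.
print(parser.can_fetch("Googlebot", "/products/coffee-cake"))     # True
print(parser.can_fetch("Googlebot", "/checkout/step1"))           # False
# Other bots fall through to the catch-all group and are blocked.
print(parser.can_fetch("SomeOtherBot", "/products/coffee-cake"))  # False
```

A well-behaved crawler makes exactly this check before every fetch, which is why robots.txt "works within the crawl step".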

And once we have the
content, we can’t

put strain on your server

anymore, so that’s fantastic.

But modern web apps being
mostly JavaScript driven,

we then put that in
a queue, and then

once we have the
resources to render it,

we actually use another
headless browser kind of thing.

We call that the Web
Rendering Service.

Then there’s other
crawlers as well

that might not have the capacity
or the need to run JavaScript.

This is like social
media bots, for instance.

They come and look for metadata.

If that meta tag is
coming in with JavaScript,

you usually have a bad time,
and they’re just like, sorry.

SUZ HINTON: Yeah, so that’s
always been a big mess,

and I remember when single
page applications, or SPAs,

really came into vogue.

A lot of people were
really concerned.

There’s a lot of FUD around.

Well, if crawlers in general
don’t execute JavaScript,

then they’re going
to see a blank page,

and how do you get around that?

So contextually,
within Googlebot,

it sounds like Googlebot
executes JavaScript–


SUZ HINTON: Even if it does
do it at a later point.

MARTIN SPLITT: Yes, correct.

SUZ HINTON: So that’s good?

MARTIN SPLITT: That’s good.

is there anything

that people need to be
aware of beyond just,

oh, well, it’ll just
run it, and then

it’ll see exactly the same
thing as a human with a phone

or a desktop would see?

a bunch of things

that you need to be aware of.

So the most important thing
is, again, as you said,

it’s deferred.

It happens at a later point.

So if you want us to crawl your
stuff as quickly as possible,

that also means we have to
wait to find these links

that JavaScript injects.

Basically, we crawl, we have
to wait until JavaScript

is executed, then we
get the rendered HTML,

and then we find the links.

So the nice little
short loop that

finds these links relatively
quickly right after crawling

will not work.

So we will only see the
links after we render it,

and this rendering can take
a while because the web is

surprisingly big.
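The point about deferred rendering can be made concrete: the first crawl pass only sees links present in the HTML the server returned, while links injected by JavaScript stay invisible until the rendering step runs. A small sketch with a hypothetical JavaScript-driven page:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Gathers href targets from <a> tags, as a first crawl pass might."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical server response for a JavaScript-driven page.
raw_html = """
<html><body>
  <a href="/about">About</a>
  <script>
    // This link only exists after the script runs in a rendering step:
    // document.body.innerHTML += '<a href="/products">Products</a>';
  </script>
</body></html>
"""

collector = LinkCollector()
collector.feed(raw_html)
print(collector.links)  # ['/about'] -- /products is invisible until rendering
```

This is why JS-injected links are only discovered after the page goes through the rendering queue: the quick crawl-time link extraction simply never sees them.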

just a little bit.

MARTIN SPLITT: There’s 130
trillion docs in 2016, so–

there’s way more now.

There’s way more now.

There’s way more than that.

robots.txt is very

effective at being able to tell
bots how to do a certain thing.

But in this scenario,
how do you tell

that it’s Googlebot visiting
your site as opposed

to other things?

as we are basically

using a browser in two
steps– one is the crawling,

and one is the
actual rendering–

both of these moments, we do
give you the user agent header.

But basically,
there’s the string–

literally the string
Googlebot in it.

so straightforward.

and you can actually

use that to help with your
SPA performance as well.

So as you can detect
on the server side,

oh, this is Googlebot
user agent requesting,

you might consider sending
us a prerendered static HTML

version, and you can do the
same thing for the others.

All the other search engines
and social media bots

have a specific string
saying that they are a robot.

So you can then basically
go, oh, in that case,

I’m not giving you the real
deal, the single page app.

I’m giving you this HTML
that we prerendered for you.

It’s called dynamic rendering.

We have docs on that as well.
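The server-side branch Martin describes can be sketched in a few lines. This is only an illustrative shape, not a production implementation: the bot substrings below are real crawler tokens, but a real deployment should match against the officially documented user-agent strings and keep the prerendered HTML in sync with the SPA.

```python
def choose_response(user_agent, prerendered_html, spa_shell):
    """Minimal dynamic-rendering sketch: known bots get static HTML,
    everyone else gets the JavaScript single-page-app shell."""
    bot_markers = ("Googlebot", "bingbot", "Twitterbot", "facebookexternalhit")
    if any(marker in user_agent for marker in bot_markers):
        return prerendered_html
    return spa_shell

googlebot_ua = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")
human_ua = "Mozilla/5.0 (X11; Linux x86_64)"

print(choose_response(googlebot_ua, "static page", "spa shell"))  # static page
print(choose_response(human_ua, "static page", "spa shell"))      # spa shell
```

Because the decision is based purely on the self-declared user-agent header, this only works for well-behaved bots that identify themselves, which is exactly the case discussed above.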

SUZ HINTON: The one thing
that still doesn’t quite

make sense to me is
does the Googlebot

have different contexts?

Does it sometimes
pretend that it’s–

I think of it as this
little mythical creature

that’s pretending to
do certain things.

So does it pretend to be on
a mobile, and then desktop?

Are the different, I
guess, user agents,

even though it still
says Googlebot?

And can you differentiate
between them?

MARTIN SPLITT: You’re asking
great questions, because yes,

we have different user agents.

So I’m not sure if you heard
about mobile first indexing

being rolled out and happening.

SUZ HINTON: I’ve heard
that it’s going to affect

how you’re ranked potentially.

MARTIN SPLITT: That as well.

SUZ HINTON: I don’t know if
that’s a rumor or not, yeah.

two different things

that get conflated so often.

So mobile first indexing
is about us discovering

your content using a mobile user
agent and a mobile viewport.

So we are using
mobile user agents,

and the user agent
strings say so.

It says something about
Android in the name,

and then you’re like, aha, so
this is the mobile Googlebot.

We have documentation on that.

There’s literally a
Help Center article

that lists all these things.
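Telling the two flavours apart in server logs is straightforward, since the smartphone Googlebot's user-agent string mentions Android. The sample strings below are abbreviated from Google's crawler documentation; treat the exact classifier as a sketch rather than an exhaustive rule.

```python
def googlebot_flavor(user_agent):
    """Classify a user-agent string as the mobile Googlebot, the
    desktop Googlebot, or something else entirely."""
    if "Googlebot" not in user_agent:
        return "not googlebot"
    # The smartphone crawler announces an Android device in its UA string.
    return "mobile" if "Android" in user_agent else "desktop"

smartphone_ua = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
                 "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 "
                 "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
                 "+http://www.google.com/bot.html)")
desktop_ua = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
              "+http://www.google.com/bot.html)")

print(googlebot_flavor(smartphone_ua))  # mobile
print(googlebot_flavor(desktop_ua))     # desktop
```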

So we try to index
mobile content

to make sure that
we have something

nice to serve for
people who are on mobile,

but we’re not pretending
random user agents or anything.

We stick to the
user agent strings

that we have documented
as well, and that’s

mobile first
indexing, where we try

to get your mobile content
into the index rather

than the desktop content.

Then there’s mobile readiness,
or mobile friendliness.

If your page is
mobile friendly, it

makes sure that everything
is within viewport,

and you have large enough
tap targets and all

these lovely things, and that
just is a quality indicator.

We call these signals.

We have over 200 of them.

SUZ HINTON: That’s a lot.


So Googlebot collects
all these signals

and then stuff them, as
metadata, into the index.

And then when we rank, we’re
like, so this user’s on mobile,

so maybe this thing that has a
really good mobile friendliness

signal attached to it might
be a better one than the thing

where they have to pinch
zoom all the way out

to be able to read anything,
and then can’t actually

deal with the different
links because they’re

too close to each other.

So that’s one of the many–

it’s not the signal.

It’s one of the many signals.

It’s one of the over 200
signals to deal with.

SUZ HINTON: I had no
idea there were 200.

That’s making me–

I know that you’re not
allowed to share what they all

are because there has to be
a certain mystique around it,

because of, I guess, a lot
of SEO abuse in the past.

yeah, unfortunately, that

is a game that is
still being played,

and people are doing weird
stuff to try to game us.

And the interesting thing with
this is, with the 200 signals,

it’s really hard
to say which one

gets you moving in the ranks.

SUZ HINTON: The weights
of each signal because–

MARTIN SPLITT: And they keep
moving, and they keep changing.

I love when people are like, no,
let’s do this, and then, look,

my rank changes.

Yeah, for this
one query, but you

lost on all the other queries
because you did really

weird and funky stuff for that.

So just build good
content for the users,

and then you’ll be fine.

SUZ HINTON: I feel like that–

it feels like less
effort as well,

than constantly trying to–

it’s not an easy answer.

You pay me to make you more
successful on search engines,

and I come to you and say,
so who are your users,

and what do they need,
and how could you

express that so that they
know that it’s what they need?

That’s a hard one because
that means I basically

bring the ball back
to you, and now, you

have to think about stuff and
figure it out, strategically.

Whereas if I’m like,
I’m just going to get

you links or do some
funky tricks here,

and then you’ll be
ranking number one.

That’s an easier answer.

It’s the wrong answer, but
it’s the easier answer.

So people are like, links are
the most important metric ever,

and I’m like, no.

We have over 200,
and it’s important,

but it’s not that important.

And chill out, everybody.

But this still happens.

glad it’s better now.

I feel, actually, more at peace
in general with SEO, as well,

after speaking to you today.

MARTIN SPLITT: Ah, so good.

Suz, thank you so
much for being with me

here, and has been
a great pleasure.

thanks for answering

all of my weird and wonderful
questions about the Googlebot.

Perfect questions.

Perfect opportunity.

Did we bust some myths?

SUZ HINTON: I feel like we did.


I think that’s
worth a high five.

SUZ HINTON: Awesome.



Join us again for the next
episode of “SEO Mythbusting,”

where Jamie Alberico
and I will discuss

if JavaScript and SEO can be
friends and how to get there.


The Art of the Guest Post

If you own a website, you know how much time and effort is involved in creating the perfect piece of content. However, once you have created it, you also need to find a way to get people to view it. This is the area most people struggle with, the main reason being that the sheer volume of content created each day is astronomical.

It has recently been calculated that over 2 million blog posts are published each day. If you would like a more visual reference, that is enough content to fill Time Magazine for 770 years. Your little piece of content can easily get lost and forgotten about.

There are a number of different ways to gain initial traction and increase your site's readership, but few are more beneficial – especially if you're willing to put in the time – than piggybacking upon other people's hard work. You can do this by creating a super high quality guest blog post.

Essentially, what you need to do is find a larger authority website in a similar niche to yours and offer them free content in exchange for an author link back to your website. A few years ago this was a very powerful method of gaining a lot of SEO juice; in recent years Google has reduced its flow-on effect. However, it is still a highly effective way to leverage an already established audience and readership.

The big issue is the way many people approach this method. Most reach out with a cold email, often simply sending in an already published piece of content. The key thing to remember is that a content manager running a successful blog does not have the time to babysit someone else's fledgling business.

Guest posting is about leverage
When we are talking about leverage, you need to realise that it is really a two-way street. You need to offer the blog owner enough value that they will be willing to post your piece of content directly on their website. After all, they are risking their audience and credibility by letting someone new create a piece of content on their site. It really needs to be worth their time.

You need to write a piece that is as good as, or even better than, what is currently on the site. You also need to make it interesting, so that people reading it become engaged and respond to it.

Automated Drop Shipping with SellerBot – Our Full Review

One of the questions I am regularly asked is what is the best system to use for a dropship website?

There are a few different options available at the moment on the marketplace, one of the newer options I have recently come across is SellerBot (available at

I managed to get in touch with the owners so that I could review their product.

There is also a demo of the e-commerce system available that you can preview as well.

You can see this at:

Username: demo
Password: demo

Here is what this area looks like:



When you first log in you will see the dashboard. This area looks reasonably straightforward to use: the menu items are on the left-hand side of the screen, categorized under different headings.

You can also instantly see the key stats about your products including:

Most Searched Keywords
Most Found Products
Most Opened Products
Most Compared Products



If you scroll down the page even further, you will be able to see how your store has performed (in terms of sales).

This area gives a great snapshot of how your store is performing and conveniently displays the following:

Total Sales:
Total Sales This Year:
Total Orders:
No. of Customers:
Customers Awaiting Approval:
Reviews Awaiting Approval:
No. of Affiliates:
Affiliates Awaiting Approval:

Simple stats


The SellerBot dashboard does give you a simple overview of how your store is performing at any given time.


Products area.

Your products.

The product page can be accessed easily by clicking products > your products.

This displays a list of all the products that are currently loaded into the store. A great feature is the in-line editing that is available on this page. You can simply click on a product name, model, price, quantity or its status, type in the new text or information, and then click the update button to instantly save your changes.

This would be a great time saver, as you can easily modify a number of products at the same time.

There is also a button that you can click if you need to modify the product even further. This area lets you not only change all the product information and images but also assign categories, create sales and discounts, and even select ‘related products’ that will help you cross-sell your items.

This area does seem very in-depth, so it may take a few minutes to get used to; however, the extra control that you receive is very beneficial.

Video of this area.





The categories area seems very straightforward to use. You can easily add new categories and subcategories, or edit already-live categories, with a few clicks of the mouse.


Video of this area.



Product options.

The product options area may also take a bit of time to get used to, however again it does give a lot of control.

The product options area creates the different options that can be available for products (e.g. sizes, colours). These can then be displayed on each individual product page as a drop-down box or as a check box.

For even more control, check out the attribute and manufacturer areas as well, which let you further group products together.


Video of this area.




There is a complete section for the information that is displayed on your website. It is split into two different areas, depending on how the content is displayed.

Information pages.
These are the static pages on your site that generally do not change
(e.g. about us, terms and conditions, privacy policy).


Video of this area.



Blog area.
The blog area lets you display news on your site. Posts are sorted by date, with the newest entry first and older ones below it.
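That "newest first" ordering is just a reverse sort on the publish date. A minimal sketch, with made-up post titles and dates:

```python
from datetime import date

# Hypothetical blog entries -- titles and dates invented for the example.
posts = [
    {"title": "Older news item", "published": date(2012, 1, 5)},
    {"title": "Newest news item", "published": date(2012, 3, 20)},
    {"title": "Oldest news item", "published": date(2011, 11, 1)},
]

# Newest entry first, older ones below it.
posts.sort(key=lambda p: p["published"], reverse=True)
print([p["title"] for p in posts])
```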



How to Dropship Successfully

When it comes to running your own business, there are many elements that have to pull together in order for you to become successful. From your marketing campaign to the point where the product arrives in the customer’s hands, all of these things must work smoothly. One of the most important is knowing how to dropship successfully, which is vital when selling products online.

What is Dropshipping? Continue reading


How Does Drop Shipping Work?

When it comes to starting up your own business, especially one on the internet, it can be difficult to know what to sell and how to deliver the products to your customers. This is especially true for small business owners and entrepreneurs who do not have the money to store the products that they have for sale. Continue reading


The Drop Shop Business Model

When it comes to starting up your own business, the easier it is to get your products into the hands of the customer, the better for your company. For small business owners and entrepreneurs, one of the biggest obstacles to overcome is streamlining your shipping process until it is a model of efficiency.

Fortunately, there is a method that has been in use for years, is highly popular, and offers businesses of all sizes the ability to ship items quickly and efficiently without ever having to touch the product in the process. Drop shipping is considered one of the best ways, if not the best way, for many types of businesses to ship goods to consumers. Continue reading


Drop shipping vs inventory

One of the questions that we regularly get asked is how drop shipping compares to holding inventory. Hopefully the information below will help you understand the differences between drop shipping and holding inventory.

To answer this question accurately, you would need to go through both the positives and the negatives of these logistical options. Continue reading


How many dropship suppliers should you use?

One of the main benefits of drop shipping is that you can list products that you do not currently have physically available. This means you can essentially list an almost unlimited number of different products from different manufacturers.

At first glance this may appear to be a great idea; however, if you start selling hundreds of different products, all from different suppliers, you will soon create a logistical nightmare. Continue reading


How does dropshipping work?

The question that we get asked time and time again is:
How does dropshipping work?

Drop shipping may sound like a complicated way to sell products; however, once you have looked into the sales process, it is one of the easiest and cheapest ways to sell online.

Drop shipping is a form of supply chain management. Drop shipping dictates how you list items for sale, how you sell your items and how you send goods to customers.

So how does drop shipping work? Continue reading
