Waterstone testing part III, what’s the stone actually doing?

Hello again faithful reader,

And there appear to be quite a lot of you out there now. Was it something I said?

So let’s recap. We have a measure of how ‘fast’ these stones are, and the results are perhaps not entirely clear. We have an arbitrary measure of how flat the stones stayed. But as you will soon see, the previous set of results is more a measure of stone consumption, not flatness. What we need to do is tie in these two measures with a series of observations that will allow us to form a better picture of what each stone does with each of the different steels tested.

Simply put, these stones are all a step in the process. They are intended to leave the blade that touches them in a good condition for the next stone in your sharpening setup. It’s all well and good being fast and long lasting, but if the blade is in such poor condition that the next stone has to work too hard, then the apparent speed means nothing, since the time saved at one step is lost in another. The savings from having a long lasting stone are also reduced because, without exception, the next stone will be more expensive than the previous stone.

The best case would be to have a stone that is economical (good cost/life ratio), works quickly on the range of steels we require it to work with, and leaves a good, smooth, and flat surface behind for the next stone.

This is the search for such a creature, if it even exists.

So, what’s going on here then?

What this part of proceedings represents is a visual inspection of both the stone and the blade’s bevel, with what was observed graded on a scale of 1-10. This inspection and grading was carried out once each stone had raised a burr on the back side of the blade, however many strokes that took, and its task for that blade was complete. The number of strokes can be seen in part one, and in most cases the stone surface and bevel were photographed.

Next, the blade would receive 20 half stone length strokes on a Naniwa Superstone of #5000 grit. I could have used any number of stones, but chose the Naniwa Superstone in this grit mainly because it is known to stay flat, it is very consistent, and, as is common with Superstones, it works quite slowly. 20 strokes is usually not enough to return the blade to completely sharp unless the blade is left flawless by the previous stone. Anything undesired in the edge will VERY quickly show up as lines or grooves in the bevel, splotchy areas (some polished, some not) and by how much of the bevel is polished.

Finally, the edge was again inspected and photographed. In this case, we are looking for any flaws in the bevel no matter how small. It was also inspected through a 100 power pocket microscope as an additional tool. Again, this was graded on a scale of 1-10.
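To make the paired grades easier to follow in the examples further down, here is a minimal, purely hypothetical sketch of how each test result might be tabulated (the `FinishScore` class and its fields are my own invention, not the author’s actual record-keeping): two 1-10 grades per stone/steel pair, one straight off the test stone and one after the 20 strokes on the #5000 Superstone.

```python
from dataclasses import dataclass

@dataclass
class FinishScore:
    """Hypothetical record of one bevel-finish test (not the author's actual data format)."""
    stone: str
    steel: str
    off_stone: int    # 1-10 grade, bevel condition straight off the test stone
    after_5000: int   # 1-10 grade, bevel condition after 20 strokes on the #5000

    def __str__(self):
        # Matches the "7/8 (off the stone/after #5000)" notation used in the text
        return f"{self.stone} on {self.steel}: {self.off_stone}/{self.after_5000}"

print(FinishScore("Naniwa Superstone", "blue steel", 7, 8))
```

So a line like “an example of a 7/8 finish” reads as: graded 7 off the stone, 8 after the #5000 polish.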

Now, I know this method may bring some protests. I ask those who disagree to consider this:

  1. The goal here is NOT to bring the blade being tested back to a usable sharpness, but to diagnose what the previous stone has done to the blade’s bevel. In this capacity, the Naniwa Superstone is without peer.
  2. Another train of protest might call out “but you skipped a grit!” Did I? If the workhorse in the sharpening setup, the #1000 grit, does a good enough job then why is the very next grit even needed? You could easily “skip a grit” or three without penalty of time, complexity or monetary cost.

And in case you were wondering, this is where it gets ugly. The previous two sets of results have been pretty predictable. For this set of results, the crystal ball is cracked.

High marks on this chart indicate a good surface finish with regards to scratch depth, flatness, and overall condition. Any stone that scores well here generally leaves a better surface on the blade than a stone scoring poorly. This ‘better’ (more consistent, finer finished) surface means that the next stone should have less work to do before it has removed all scratches left by this medium grit stone. An inferior score here means that the next stone will be required to do more work, and that next stone may need to be of a coarser grit than a better scoring stone might allow. In simple terms, a stone scoring well here will likely make your work with finer stones easier; a poor score will make your work more difficult than it needs to be.

There are a few standouts here that I’d like to point out:

  1. First is of course, the Naniwa Superstone. While it’s been left behind previously, it’s the definite standout here. What it does to blade edges is something that needs to be seen to be believed. It is however an extreme example of good finish at the expense of speed and ability.
  2. The next stand out is the Sigma Power Hard. What’s worthy of mention here is that in previous tests, it was keeping up. Not the fastest, not the slowest and more likely to be in the top half of things. And yet for its ability to at least “keep up”, it still delivers a blade absolutely ready for the next stone, and naturally does it quickly and without fuss.
  3. Not too far away from the Sigma and just on the other side of the “speed/finish” hump is the King Hyper. Usually just a little faster than the Sigma Power, but gives up some points in the appearance stakes. So close is the performance of the King Hyper and Sigma Power Hard, that they may be related.

Speaking of being related, we can see something of a shock here with the Shapton Trio.

  1. Starting with the Glass Stone: while it did seem to be working quite well most of the time, it scores poorly here. (I’ve actually gone back over my results several times to try to discover why this happened, and whether I’d made a mistake. On reflection, the score stands as it is).
  2. In the same boat, the Shapton Professional pair. Despite slightly different labelling, there is next to nothing to differentiate these two from each other. There was some give and take through the testing, but the end result is an identical score. This is corroborated later in the results.

And there’s a very good reason for this rapid speed, but poor finish. Shapton marches to a different tune from the rest of the stones tested here. If you own a 1000 grit Glass Stone, take a look at the back where it clearly states that the abrasive is 14.7 microns. However, the current JIS standard R6001-87 states that 1000 grit/mesh is 11.5 (±1) microns.

Before I started testing I was aware that Shapton used a different grit grading system from the JIS standard, but was not sure if it would in fact make a difference. I think now it’s very obvious that it does. Also note that Norton uses a similar grit sizing (14 microns) and presented similar bevel finish scores. Additionally, the King Deluxe is an old stone and most likely uses the old JIS standard, putting its results here into perspective. Just so you know, the old JIS standard puts #1000 grit at 15.5±1 microns, which clearly suggests where Shapton and Norton got their grit numbers from. In the case of the King stone, it’s very likely a grit sizing holdover, because the basic stone has not changed in decades.

I’m not quite sure what else to say here, other than when looking at the previous charts with regards to speed and consumption of stone, keep in mind that the Shapton Trio, Norton, and King Deluxe are the same grit as the King Neo and approximately 20% coarser than everything else being tested here.
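Those grit-size claims are easy to sanity-check with a few lines of arithmetic, using only the micron figures quoted above. Note the exact percentage depends on where within the ±1 micron tolerance band you compare from, so the “approximately 20% coarser” figure is a reasonable middle ground:

```python
# Nominal #1000-grit abrasive particle sizes (microns), as quoted above.
SHAPTON = 14.7        # printed on the back of the 1000 grit Glass Stone
JIS_CURRENT = 11.5    # JIS R6001-87: 11.5 +/- 1 micron
JIS_OLD = 15.5        # older JIS standard: 15.5 +/- 1 micron

def percent_coarser(size, reference):
    """How much coarser (larger particle) `size` is than `reference`, in percent."""
    return (size / reference - 1) * 100

# Against the midpoint of the current standard...
print(f"{percent_coarser(SHAPTON, JIS_CURRENT):.0f}% coarser")      # ~28%
# ...and against the top of the +/- 1 micron tolerance band.
print(f"{percent_coarser(SHAPTON, JIS_CURRENT + 1):.0f}% coarser")  # ~18%
# The Shapton figure sits almost exactly on the old standard.
print(f"{percent_coarser(SHAPTON, JIS_OLD):.0f}% vs old JIS")       # ~-5%
```

Shapton’s 14.7 microns comes out 18-28% coarser than a current-JIS #1000, and within about 5% of the old #1000 standard, which supports the holdover theory.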

Another straggler is the Sigma Power Select II, but the explanation is easier here. The Select II has no binder, only abrasive. As such it is quite friable and is always releasing grit during use. Therefore, the scratches present in the bevel, which drop its scores in this company to “well below average,” are what you would expect from a fresh stone. The way it works causes the scratches, but also makes it untouchable for speed. You win some and you lose some.

And of course, this evaluation of the bevel would be best served by actual pictures of what is going on. To give an idea of what the respective scores look like, a sampling of them in pictures appears below.

First, an example of a 4/4 (off the stone/after #5000) in blue steel from the Oribest…

I regret the images are not as clear as I would like, but do take note of the middle picture, off the stone. Note that the finish is not clear and uniform, and seems to have a few scratches in it. This is compounded after the #5000 stone: scratch marks in the hard steel area, and no effect on the softer iron backing. This told me that the stone not only left a fairly poor finish on the bevel, but may also be somewhat out of flat. Not much, but enough to make a difference.

Next, an example of a 5/6 finish from the Shapton Glass Stone on blue steel;

The finish off the stone here is more uniform (the splotches are water) but on closer examination, there were still some very small scratches that the camera could not pick up. To quote my notes: “Smooth, machined finish. Many tiny scratches, good”. The polished example again looks very good, but those tiny scratches were still present, which downgraded the score to only a “6”.

Lastly, an example of a 7/8 finish from the Naniwa Superstone on blue steel;

Notice that the hard steel is well polished (not misty or incomplete) and my notes state: “Great, smooth. Few tiny scratches. Polish is full depth and very clean. Excellent!” I only wish it showed up better in pictures than it does. The splotchy appearance in picture 2 is actually the backing iron sporting something of a polish itself, from the #1000 stone. A common trait of the Naniwa Superstone.

There will be more pictures to come as I sort through them. As you might expect, there are quite a few…

Until next time, thanks for reading.

Stu.

5 comments to Waterstone testing part III, what’s the stone actually doing?

  • -dg

    This has been a very interesting series. What stood out for me was how versatile the Bester and Arashiyama seem to be. They cut fast on all the usual steels, and at least adequately on the CPM steel, stayed flat, and left good finishes on everything. In particular, the Bester is the only stone in your lineup that never got below a 6 on any of the finish scores. All the others have some 5s, or 4s or worse.

  • Hi Dave,

    You know something, you’re right!

    I’m not actually looking any further ahead than what’s being done at the present time. There are no winners’ trophies up for grabs, just an honest comparison of what’s out there. And believe me, I did not think the Bester actually did all that well. It’s got some issues I don’t especially like that I’ll touch on later. Nothing serious, just slight annoyances. The Arashiyama however is the big surprise. It’s just good, real good. I don’t think I’d want anything changed on it, but I wouldn’t want it as my only #1000 stone either. It too has some slight issues and it’s not something I’d want to be using every day. Exceptionally good, inexpensive, what’s not to like?

    But I’m getting ahead of myself here. All I’m really doing is looking at the results I have written down. I took little notice of what stone did what, only making sure the grade was attributed to the correct stone.

    I am glad you are enjoying this comparison, it’s been a long haul for me, but ultimately worth it I think. And as long as someone, anyone(!) gets some value from it, that’s reward enough.

    Stu.

  • Eric

    Stu,

    So are you going to be selling the Arashiyama stones?

    -Eric

  • -dg

    I am waiting patiently (hopping from foot to foot, waving hands about, pacing around room …) for the next installment.

    I was impressed by your procedure. It is not an easy experiment to design, and quite possibly your method of blunting is inconsistent enough to compromise the results, but at least your tests try to be objective and to measure meaningful dimensions, in contrast to the usual subjective handwaving on this topic. Excellent work.

    That said, I’d love to hear your subjective handwaving too.

  • Hi Dave,

    You are correct, the results could be inconsistent, but with my limited resources, especially time, I’ve done what I can, and can see some obvious trends occurring.

    I did find however that after 3 runs doing the same test on the same stone, the results ended up fairly consistent throughout. Maybe a half dozen throw outs from the whole lot, and in those cases I simply averaged from 2 instead of 3.

    It’s given me an intimate knowledge of the stones tested, and these results are just as interesting to me, since I played no favourites. It’s very important to me that if someone else takes up the torch and tries to repeat my testing, they come up with a similar result.

    It would have been VERY easy to doctor the results to make one stone look better or worse, but on the particular stones that I might be inclined to favour (Sigma Power) I made sure I marked them harder than anything else, just in case they did well and I got criticized for shilling them here.

    The worst part is, the dang things really are as good, maybe even better, than I’m showing here. If that Sigma Ceramic Hard #1000 ends up looking like a winner, there’s no doubt that it’s earned it.

    Subjectivity comes later.

    Eric, I’m looking into it. I want to very much, but I need to source them from the manufacturer, which is often very difficult here. The problem is they are already available in the US, and at an excellent price. If I can’t even come close to matching it (which I can’t now), it’s difficult to justify listing them.

    And if my testing boosts sales for someone else, and that person is honest and honourable, I’m happy with that.

    Stu.