Huh? Yesterday, someone pointed out a technology on the PCGS forum that has been used to image cultural artifacts, including coins, called Reflectance Transformation Imaging (RTI). The organization behind it has a demo video showing a coin being viewed with the lighting characteristics changed after the picture has been taken. Here's a short video demo. The catch with showing someone a picture taken this way is that they need special viewing software, but the viewer is available as a free download (with non-numismatic sample images).

The "image" being viewed has been stripped of all shadows and highlights. Each pixel represents the actual color of the coin as seen when the light is ideally placed to reveal that color. The viewing software is what provides a "virtual light" to view the coin. For that light to work, the coin has to be treated as a 3D surface rather than a 2D image: in addition to the color, the elevation, surface normal, and reflectivity of each point on the coin must be known. All of these are derived from the initial set of photographs taken of the coin. A spot that varies from really bright to really dark is highly reflective, while a spot that doesn't vary no matter how it's lit could be fine corrosion. The surface color, reflectivity, and normal, along with the size, shape, color, and position of the light (how to diffuse or harden the light wasn't demonstrated), determine how the coin is rendered for the viewer. Pretty cool.

The amount of data to transmit to someone for viewing, assuming they have the viewing software either installed on their device or available as a WebGL (or similar) browser plugin, would be more than for a normal image, but not by orders of magnitude. Each pixel needs, in addition to the RGB value, a z-coordinate, a 3D surface normal (x, y, z), and a reflectivity value, which comes out to under 3x as much data as a plain old 2D image (quick arithmetic below). It probably wouldn't be very tolerant of lossy compression, but it's still far less data than a video and gives the viewer much more flexibility.

So would taking the 24 pictures (the example in the article) slow the process down so much that it would make a TrueView cost $100 per coin? Nope. Much of the time spent taking the perfect coin shot is adjusting the lighting manually and interactively until it looks best. That's no longer necessary, since the privilege (burden?) of lighting is shifted to the viewer. The photographing step, including the RTI surface generation, could be fully automated. The artistic work of the photographer would be limited to setting up an ideal representative traditional image by moving the virtual light(s) around and taking a virtual photo.

Will I be offering this service soon? Nope. Way too complicated for a low-volume one-man shop to set up at this point. Would I if I could? Damn straight! The viewer and the software that generates the special image format are free downloads. A rig to take the pictures would ideally have a robotic arm holding a light, with both the camera and robot controlled by the same software, tagging each photo with the lighting position. The series of lighting angles used would then be repeatable every time.
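For anyone who wants to sanity-check that "under 3x" figure, here's the back-of-the-envelope arithmetic as a quick script. The byte counts are my own assumptions for illustration, not anything from the actual RTI file format:

```python
# Rough per-pixel storage estimate for a relightable image vs. a plain RGB image.
# All byte counts are assumptions for illustration, not the real RTI format.

RGB = 3            # 8-bit red, green, blue
Z = 1              # elevation, quantized to 8 bits
NORMAL = 3         # unit surface normal (x, y, z), 8 bits each
REFLECTIVITY = 1   # specular / "shininess" coefficient, 8 bits

plain_2d = RGB
relightable = RGB + Z + NORMAL + REFLECTIVITY

print(f"plain 2D image : {plain_2d} bytes/pixel")
print(f"relightable    : {relightable} bytes/pixel "
      f"({relightable / plain_2d:.2f}x, i.e. under 3x)")

# For a 1000x1000 crop that's roughly 3 MB vs 8 MB uncompressed.
print(f"1000x1000 crop : {plain_2d * 1_000_000 / 1e6:.0f} MB vs "
      f"{relightable * 1_000_000 / 1e6:.0f} MB uncompressed")
```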
My first two questions would be about how it handles White Balance and thin-film interference. Promising stuff, though.
White balance is a non-issue for the technology, as it's the responsibility of the photographer to guarantee that, and the white balance doesn't shift with lighting angle. Thin-film interference is an issue I thought of. The premise behind being able to change the lighting is that the hue and saturation of a pixel are constant, but the lightness can change. With stuff like proof copper, this isn't true, so I imagine it wouldn't work as well unless this technology additionally modeled hue shifts at each pixel. I'm guessing they haven't gotten to that yet.
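To make that premise concrete, here's a toy sketch of relighting a single pixel under the "hue and saturation stay fixed, only lightness changes" assumption (the numbers are invented, and this is a simplification of what the real software does). A thin-film pixel whose hue genuinely shifts with angle has nowhere to go in this model:

```python
import colorsys

# Stored per-pixel data: a base color plus a surface normal. Values invented.
r, g, b = 0.75, 0.45, 0.30              # pixel color with the light ideally placed
h, l, s = colorsys.rgb_to_hls(r, g, b)

def relight(normal, light_dir):
    """Diffuse-only relighting: scale lightness by n.l, keep hue and saturation."""
    ndotl = max(0.0, sum(n * d for n, d in zip(normal, light_dir)))
    return colorsys.hls_to_rgb(h, l * ndotl, s)

normal = (0.0, 0.0, 1.0)                     # pixel facing straight up
print(relight(normal, (0.0, 0.0, 1.0)))      # light overhead   -> full lightness
print(relight(normal, (0.71, 0.0, 0.71)))    # light at ~45 deg -> dimmer, same hue
# Thin-film toning would need h itself to vary with light_dir, which this
# model (and, as far as I know, basic RTI) doesn't capture.
```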
HDR imaging (96 bits of data per pixel) can also be controlled by the viewer's software. Shadows can be "opened", and blown-out portions better controlled. I have yet to test it on coin images! It should work fine; it is not 3D, but rather contains data for each pixel that lets the viewer alter the colors, intensities, and brightness, not limited to 16 bits per channel!! I have provided all of the simple programs needed to view and work with HDR images, especially pseudo-HDR using basic digital camera images. I explain it all here, and link you to the free downloads, some of which are on-site (mostly Windows XP and Windows 8). http://www.biblical-data.org/HDR.html Gary in Washington
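For anyone who'd rather script the pseudo-HDR step than use standalone programs, here's a minimal sketch using OpenCV's bracketed-exposure merge. The filenames and shutter speeds are placeholders; this isn't Gary's software, just the same idea in a few lines:

```python
import cv2
import numpy as np

# Three bracketed exposures of the same coin (placeholder filenames and times).
files = ["coin_under.jpg", "coin_mid.jpg", "coin_over.jpg"]
times = np.array([1/125, 1/30, 1/8], dtype=np.float32)   # shutter speeds in seconds

imgs = [cv2.imread(f) for f in files]

# Merge to a 32-bit-per-channel radiance map (96 bits per pixel), then tone-map
# it back down so shadows are "opened" and highlights aren't blown for display.
hdr = cv2.createMergeDebevec().process(imgs, times=times.copy())
ldr = cv2.createTonemapReinhard(gamma=1.2).process(hdr)

cv2.imwrite("coin_hdr_preview.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
```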
Don't get me wrong; I'm utterly fascinated by it. For our purposes, given the relatively uncomplicated circumstance of coins with their lower relief, it'd be easy to rig a hemispherical LED array and, in conjunction with remote shooting software, script the entire shooting and lighting sequence into a one-click affair (sketched below). Given the 45-50mm diameter of duplicating lenses, you could come closer to true vertical lighting than their full-frame lens setup. Then, rather than requiring the viewer to download and implement a software solution, you just pick and choose among the shots to create an HDR composite 2D image of the coin with no lighting weaknesses. Instant forum-postable, gradable imagery.

Another potential use is in conjunction with rmpsrpms' 3D imaging/.gif technique. Mount the camera - something like a Canon SL1, which barely exceeds 14 ounces - on a ball mount/arm, and vary the camera position just like you do the lighting. That would require an open lighting framework rather than the matte black enclosed hemisphere I'm envisioning from above, and therefore stricter control of environmental lighting, but the result would be imagery of sufficient resolution to identify detail features without additional magnification. That'd require me to go to 32GB of RAM in my computer, I think. Serious CPU crunching required. You gotta see this, @rmpsrpms.
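Here's roughly what that one-click shooting sequence could look like, assuming the LEDs hang off an Arduino on a serial port and the camera is fired through the gphoto2 command line. The port name, baud rate, and "ON n"/"OFF" protocol are all invented for illustration; substitute whatever your dome controller actually speaks:

```python
import subprocess
import time
import serial  # pyserial

# Assumption: an Arduino at /dev/ttyACM0 turns LED n on when it receives "ON n\n"
# and turns everything off on "OFF\n". gphoto2 drives the tethered camera.
NUM_LEDS = 48
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
time.sleep(2)  # let the board reset after the port opens

for led in range(NUM_LEDS):
    arduino.write(f"ON {led}\n".encode())
    time.sleep(0.2)  # let the light settle
    subprocess.run(
        ["gphoto2", "--capture-image-and-download",
         "--filename", f"rti_{led:02d}.jpg"],
        check=True,
    )
    arduino.write(b"OFF\n")

arduino.close()
```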
This post has a sample system: https://culturalheritageimaging.wor...portable-dome-rti-system-for-imaging-lithics/ And a preso here:
There is a limitation to that approach, however, in that whenever you look at a coin, you tilt it around in the light. An HDR 2D image may be the basis for the color information, but not for surface or luster quality. Being able to move the light around while viewing is equivalent to tilting the coin, assuming things like luster play nice with this technology. Where I see an interesting combination is pairing this with focus stacking, so you could use it for "virtual microscopy" (rough sketch below). You'd need a lot more pictures at every focal plane to capture the surface normal characteristics accurately for all the in-focus pixels, but then you'd have more accurate 3D information and also be able to move the object in addition to the light. With the 3D information you could also re-focus the image at viewing time, but you'd need that built into the viewing software, of course.
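A crude version of that focus-stacking idea, just to show the mechanics: per pixel, keep the slice where the local Laplacian sharpness is highest. The filenames are placeholders, and as a bonus the winning slice index doubles as a rough height map you could feed into the 3D side:

```python
import cv2
import numpy as np

def focus_stack(paths):
    """Naive focus stack: per pixel, keep the slice with the highest local
    sharpness (Laplacian magnitude)."""
    imgs = [cv2.imread(p) for p in paths]
    grays = [cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in imgs]
    sharp = [np.abs(cv2.Laplacian(g, cv2.CV_64F, ksize=5)) for g in grays]
    # Smooth the sharpness maps a little so the per-pixel choice isn't noisy.
    sharp = np.stack([cv2.GaussianBlur(s, (9, 9), 0) for s in sharp])
    best = np.argmax(sharp, axis=0)          # index of sharpest slice per pixel
    stack = np.stack(imgs)                   # (n, h, w, 3)
    h, w = best.shape
    out = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
    # The slice index is a rough depth map: focus plane ~ height.
    return out.astype(np.uint8), best

merged, depth_index = focus_stack(["plane_00.jpg", "plane_01.jpg", "plane_02.jpg"])
cv2.imwrite("stacked.jpg", merged)
```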
It's a very cool technique! I remember seeing a presentation of an early version when I was at HP/Agilent; I think the pic showing the guys with the tripod and string to set the lighting distance was from that original presentation. I don't know how well it will work for regular coin imaging. The issue is that the final file doesn't show the actual lighting samples, but an averaged rendering, so nothing you see is "real" in terms of lighting, and it falls short of giving you the "in-hand look". I've often contemplated building a similar dome and then creating an animation of the actual images, to show what the coin would look like lit from various angles. It could be done without any special software, just simple animations, and it wouldn't need to be 48 images either. Maybe I'll give it a go...
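For the "no special software, just simple animations" route, something like this would turn a handful of dome shots into a forum-postable GIF using only Pillow (the filenames and frame count are placeholders):

```python
from PIL import Image

# A handful of shots of the same coin, each lit from a different angle
# (placeholder filenames). Far fewer frames than a full 48-light RTI capture.
frames = [Image.open(f"coin_light_{i:02d}.jpg") for i in range(8)]

# Downsize a bit so the GIF stays forum-postable.
frames = [f.resize((f.width // 2, f.height // 2)) for f in frames]

frames[0].save(
    "coin_lighting_sweep.gif",
    save_all=True,
    append_images=frames[1:] + frames[-2:0:-1],  # sweep out and back
    duration=150,   # ms per frame
    loop=0,         # loop forever
)
```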
It looks like my second post was lost... There is a 3D-printable model of the dome here: http://emkpph.cias.rit.edu/RTI/rti-half-dome-130mm.stl and an article on its construction here: http://firstmonday.org/ojs/index.php/jbc/article/download/6625/5247 which includes partial wiring diagrams, better photos of the nuts and bolts, and some Arduino code.
I've done animated GIFs with 6 different lighting positions. The files get big quickly because all the frames have to be stored, and there's also no "off switch" on an animated GIF. I guess comparing that result to a virtually lit RTI of the same coins would tell you how well the technique shows you everything. I assume the $20 Saint would work pretty well, but the toned Lincoln would have problems with the hue shifting with the lighting.
Do remember the original purpose is not to create a better picture that matches the object in hand; it's to reveal faint, worn, etc. markings. Same with the false color. So if that's of interest to you... Judging from the original picture using the light and string, it also seems you don't need as many photos. I wonder what happens if I set up 5 Jansjos in a circle and take a photo with each one to feed into the software.... Finally, I think they are missing a step. 48 white LEDs (or whatever) are not going to have uniform brightness or exactly equivalent color. Shouldn't there be an 18% grey card shot first to create the 'norm'?
I wondered about that as well. In order to replicate the full reflectance field of a coin, wouldn't you have to cover both all light positions and all camera positions? Or does it turn out that all light positions with one camera position (like we're discussing here), or all camera positions with one light position (like tilting and turning a coin in front of your eye), are each enough by themselves?
Either is sufficient, but moving the light is easier, as it ensures every image can be precisely registered with every other image. What you're after is knowing, for each point on the coin that maps to a unique pixel in a 2D image, the color, specularity, and surface normal. The surface normal is what's key to simulating moving the light around in viewing.

Imagine you are taking a picture of a flat mirror, tilted in some unknown direction. You aren't allowed to look at the mirror in space, only in the photo. You need to determine the direction the mirror is pointing, which is the surface normal of the mirror. You have a light source whose location and direction you can know. If you move the light around and take a bunch of pictures, some will show the light reflecting toward the camera, some away. One might show it very bright, since it's reflecting directly into the camera. That is the picture that indicates the surface normal of the mirror: since the location of the light source is known, we can determine the surface normal by bisecting the angle between the light and the camera (angle of incidence = angle of reflection).

For a coin, you do this calculation over the entire image. For a 1000x1000 image, you have 1,000,000 different surface normals. These should allow things like luster to be shown correctly, since each pixel of the image now knows how to behave with different locations of the virtual light.
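If you want to play with the math, the mirror reasoning generalizes to classic photometric stereo: with a handful of images under known light directions, each pixel's normal and albedo fall out of a least-squares fit, assuming the surface behaves roughly diffusely. This is a sketch of that textbook technique, not the actual RTI fitting code (which fits polynomial texture maps instead):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate a per-pixel surface normal and albedo from images lit by known
    directional lights, assuming a roughly diffuse (Lambertian) surface.
    images:     (k, h, w) grayscale intensities, one per light
    light_dirs: (k, 3) unit vectors pointing from the coin toward each light
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                  # (k, h*w)
    L = np.asarray(light_dirs, dtype=float)    # (k, 3)

    # Lambertian model: I = L @ (albedo * normal). Solve for g = albedo * normal
    # at every pixel in one least-squares shot.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)          # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)                  # (h*w,)
    normals = g / np.maximum(albedo, 1e-8)              # unit normals per pixel

    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

# Example light set: four lights at ~45 degrees elevation, N/E/S/W of the coin.
dirs = np.array([[0, .7, .7], [.7, 0, .7], [0, -.7, .7], [-.7, 0, .7]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
# imgs = np.stack([...])   # load your 4 grayscale shots here, shape (4, h, w)
# normals, albedo = photometric_stereo(imgs, dirs)
```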
That's one application. For AU-BU coins, being able to tilt the coin in the light to detect or assess surface imperfections or rub may be an equally viable application. With fewer lighting samples you'd have less precision in your surface normals, which means the depiction of the item under the virtual light would be less accurate. Yes, there should be a calibration step to ensure that all the lights are equivalent, but that's outside the scope of the technology itself.
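For what it's worth, that grey-card calibration could be as simple as shooting an 18% grey card once per LED and scaling each coin shot, per channel, so the card lands on the same value for every light. A sketch with invented filenames, assuming exposure is otherwise held fixed:

```python
import cv2
import numpy as np

NUM_LEDS = 48
TARGET = 118.0   # roughly 18% grey as an 8-bit sRGB value

for led in range(NUM_LEDS):
    # One shot of the grey card and one of the coin, both lit only by this LED.
    card = cv2.imread(f"card_{led:02d}.jpg").astype(np.float32)
    coin = cv2.imread(f"coin_{led:02d}.jpg").astype(np.float32)

    # Average the centre of the card per channel, then scale the coin shot so
    # the card would have landed on TARGET -- evens out LED brightness/colour.
    h, w = card.shape[:2]
    patch = card[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
    gain = TARGET / patch.reshape(-1, 3).mean(axis=0)   # one gain per B, G, R

    corrected = np.clip(coin * gain, 0, 255).astype(np.uint8)
    cv2.imwrite(f"coin_{led:02d}_norm.jpg", corrected)
```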