Building an Experimental Museum-based Experience with NFC, 3D Prints, and 3D Scans.

For a long time, I’ve been interested in digital experiences that deliver location-based content and information in heritage spaces and places. One of the complicating factors in these sorts of projects has always been how the thing that you’ve built knows where the user is at any given time, and therefore what content to push to them. There are all sorts of ways to address this – GPS, wifi-positioning, low energy bluetooth, RFID, even QR codes. They all have their strengths and weaknesses. I’ve always wanted to experiment with NFC (Near Field Communication) tags as a way of pinpointing locations and delivering hyper-location based content to the user. They are cheap, don’t require on-board power to function (as they are powered by the reader through electromagnetic induction), come in lots of form factors, can be encoded with all sorts of information, and work pretty ubiquitously on both Android and iOS devices. In a heritage or museum context, one of the strengths of NFC is that it doesn’t require input or manual pairing. As long as the smartphone has NFC turned on (which is increasingly common these days, as most smartphones ship with the NFC radio enabled by default), it just works. NFC isn’t without its limitations, however. Probably the most significant one is that NFC tags have super limited range. The smartphone has to be very close to the NFC tag – generally, no more than a centimeter away. In a lot of cases, the tag actually needs to be touching the smartphone in order to be read consistently. This is really the key difference between NFC and RFID, which works over longer distances (as much as 100 feet if the RFID tag is equipped with a power supply).

NFC tags come in 5 different types (conveniently called types 1-5). The main difference between the various types is the amount of data that can be stored on the tag (ranging from 96 bytes for a type 1 to as much as 3,584 bytes for a type 5). There is also a little variety within each type depending on the tag’s specific chip. This is why you’ll often see a chip name (such as NTAG215) instead of a chip type (1-5) when you go to purchase. The one important thing I found is that some of the chip types aren’t re-writable. So, once you write data to them, they are essentially locked with that data. For more information on the general 1-5 types, check out this website, or this website for more details about the specific chip types.
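To put the capacity question in concrete terms, here’s a minimal Python sketch (not something I actually used for this project) that encodes a URL as an NDEF URI record and checks whether it fits on a few common chips. The ndeflib package is an assumption on my part, and the capacity figures are approximate usable NDEF memory – check the chip datasheets for exact numbers.

```python
# Sketch: how big is an NDEF URI record, and does it fit on a given chip?
# Assumes the ndeflib package (pip install ndeflib); the URL is a placeholder
# and the capacities are approximate usable NDEF bytes, not exact datasheet values.
import ndef

TAG_CAPACITY_BYTES = {
    "NTAG213": 144,
    "NTAG215": 504,
    "NTAG216": 888,
}

url = "https://sketchfab.com/3d-models/example-model-id"  # placeholder URL
record = ndef.UriRecord(url)
encoded = b"".join(ndef.message_encoder([record]))

for chip, capacity in TAG_CAPACITY_BYTES.items():
    verdict = "fits" if len(encoded) <= capacity else "does NOT fit"
    print(f"{chip}: {len(encoded)} byte record {verdict} in ~{capacity} bytes")
```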

I got the opportunity to build something around NFC tags and hyper-location based information when I was approached in the fall of 2021 by Ramya Swayamprakash (now Dr. Swayamprakash) about doing some 3D prints of archaeological artifacts in my lab as part of her SEEK Fellowship. She and three other fellows were collaborating on an exhibit in the MSU Museum called The Observation Experiment.

Ramya was working with Dr. Jessica Yann (Archaeology Collections Manager and NAGPRA Program Manager at the MSU Museum) to explore how observation factors into the archaeological process. Ramya wanted to include 3D printed versions of historic ceramics from the excavations at Fort Drummond (which are part of the overall archaeological collections we curate at MSU). The idea was that the 3D prints would be exhibited alongside the archaeological objects and would be handleable by visitors. Pretty run of the mill stuff, honestly…but I was happy to help out. I decided to take this one step further and envisioned a scenario where visitors could scan an NFC tag on each printed object and then view and manipulate the corresponding 3D scan on their smartphones. This would allow exhibit visitors to interact with facsimiles of objects that, for obvious reasons, they can’t physically handle. NFC tags would provide that super granular locality. Scan one particular tag on one particular print, and see the 3D model of that particular object.

So, how did I build this little experiment? Three easy steps (well, 4 really, but I didn’t do the last one):

Step 1: Scan the Objects

Ramya and Jessica chose a series of ceramic artifacts – 1 pearlware saucer that had been (mostly) refitted and 4 additional sherds (all from the same vessel). I’m well known for pronouncing loudly that I hate digitizing historic archaeological objects. They are often thin, have fiddly bits, and are shiny or transparent – everything that creates difficulty for 3D digitization. Please note, “fiddly bits” is a technical archaeological term. While the objects that had been chosen for this little project weren’t glass (probably the worst kind of thing to scan), they weren’t super easy to scan either. The glaze was shiny (as glaze often is) and quite fine (thin edges…also a pain to scan). Instead of using photogrammetry (which in hindsight I probably should have used), I did the 3D capture with an Artec Space Spider. It scans very, very quickly (one of the strengths of structured light scanning). The Spider isn’t all sunshine and roses, though. First, it is finicky with things that have thin edges, which was a pain on this project. In addition, most objects require you to do multiple scans, clean those up, and then align the various bits and fuse them into one model. This was particularly problematic with these objects, as positioning them to get overlapping scans (which are critical to align the various bits) was challenging. The solution (as seen in the image below) was to place the refitted saucer in a container of sand, thereby holding it vertical and allowing me to scan around the entire saucer while holding the scanner in a natural, level position. Once the first scan was done, I’d flip it over so that the part that had been submerged in sand during the first scan was exposed and do a second scan.

Scanning with the Artec Space Spider.

Once the raw scan data was captured, I would cut out everything in both scans that wasn’t the actual saucer (the container of sand, any bits of the turntable that got included in the capture) in Artec Studio (the software you use to both scan and process with an Artec scanner). The result was two very clean scans that could easily be aligned and fused into one object. Easy peasy, right? Nope. I discovered that Artec Studio has a software bug where it doesn’t actually remove the textures associated with the geometry you’ve deleted. So, the untextured mesh looks great, but the textures are absolutely horrible – making the finished model basically useless. In the end, I was forced to scan the object sitting flat on the turntable (flipping it around for each part of the scan). While this is certainly the natural position of something like a saucer, it makes for an awkward scan. You’ve got to try to scan under the rim of the vessel so that you’ve got overlap between the two chunks of the scan – not an easy process. The result was a model that was, shall we say, suboptimal. Even under the best circumstances, I’ve been pretty unhappy with the quality of the textures generated by the Artec devices (I’ve also got an Artec Eva in the lab). The published spec is 1.3 megapixels – which is not wonderful. In Artec Studio 16, the company introduced a photo texturing workflow which allows you to combine scan data (for geometry) and photogrammetry data (for texture). It isn’t perfect, and it increases the model creation time significantly (killing one of the main benefits of the Artec scanners – speed). However, it might help with the lower quality textures.

Once I was finished with the scans, I threw them up on Sketchfab on the lab’s account and fiddled with the model properties a little (lighting, material, etc). While I’m not a huge fan of SaaS tools for critical scholarly infrastructure, Sketchfab is great for providing quick, online access to models. It is 100% not a preservation platform, but it has powerful tools for display and public access. In addition, Sketchfab has been very generous to the cultural heritage community, providing pay-tier accounts to museums, digitization projects, etc. for free. Yes, it is absolutely in the company’s best interest to grow the content on their platform, and that’s fine as long as the digital heritage community understands this. Thomas Flynn, Sketchfab’s Cultural Heritage Lead, has been a member of the digital heritage community for a long time and is a great ally to a lot of the work that we do. I appreciate this commitment, and will continue to recommend them for institutions, projects, and scholars that don’t have the necessary technical infrastructure or capacity to self-host and display their own models using an open source tool such as 3DHOP or the Smithsonian’s Voyager.
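As an aside, Sketchfab also has a Data API, so the upload step can be scripted rather than done through the web interface (which is what I actually did). Here’s a hedged Python sketch of what that might look like – the token, file name, and the exact response fields are assumptions on my part, so check the current API documentation before leaning on it.

```python
# Hypothetical sketch: uploading a model archive via the Sketchfab Data API (v3).
# I uploaded through the web interface for this project; this is only illustrative.
# API_TOKEN and the file path are placeholders, and the response fields assumed
# here (e.g. "uid") should be verified against Sketchfab's API docs.
import requests

API_TOKEN = "your-sketchfab-api-token"  # placeholder
MODEL_ENDPOINT = "https://api.sketchfab.com/v3/models"

def upload_model(path, name):
    """Upload a zipped model and return the new model's uid."""
    with open(path, "rb") as f:
        response = requests.post(
            MODEL_ENDPOINT,
            headers={"Authorization": f"Token {API_TOKEN}"},
            files={"modelFile": f},
            data={"name": name, "isPublished": True},
        )
    response.raise_for_status()
    return response.json()["uid"]

if __name__ == "__main__":
    uid = upload_model("saucer_scan.zip", "Refitted pearlware saucer")  # placeholder file
    print(f"https://sketchfab.com/models/{uid}")
```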

The scan of the refitted pearlware saucer can be seen here:

The scans of the individual sherds can be found here:

Step 2: Print the Objects

I’ve got two 3D printers in my lab – an Ultimaker 3 Extended and a Makerbot Z18. Both are FDM (Fused Deposition Modeling) printers that work by heating, extruding, and fusing material into the final form. I generally print in PLA because it’s cheap, comes in a stupid array of colors, doesn’t give off much in the way of fumes while printing, and is (technically) biodegradable. I buy from Matterhackers and can generally get a spool of filament for between $20 and $25. I almost always print on the Ultimaker. It’s reliable and has a dual extruder (so you can print with two materials). The only thing that is good about the Makerbot Z18 is that it has a huge build volume. Other than that, it doesn’t have a lot to recommend it. As such, I almost never print with it. At this stage of the game, it’s become a bit of a prop.

Despite the fact that I just said that my lab’s Ultimaker is pretty reliable, printing these objects (both the refitted saucer and the individual sherds) was a pain in the ass. I had 3 failed prints during the process. Full on bird’s nests when I walked into the lab the next day.

As such, it took a lot longer to get a couple of decent prints than I expected. The printer flat out stopped printing anything properly after I managed to get a second set of clean prints. Turns out that both extruders were gummed up and required some serious cleaning. I printed in a dull white filament, and the results were about as good as you can get from an FDM printer.

Step 3: Affix and Encode the NFC Tags

The NFC part of this has two components – the tags themselves and the process of encoding them with data (in my case, the URLs to the models on Sketchfab). After a little bit of research, I ended up purchasing some pretty straightforward tags – 35 white, adhesive, 1 inch diameter tags for a little over $15. This was mostly an “ok these are pretty cheap and don’t have horrible reviews” kind of scenario. You can buy these things everywhere and in a form factor that meets your project needs. The one thing that I did spend a little time on was ensuring the specific chip type would (1) accommodate a URL and (2) be rewritable (just in case). I ended up going with NTAG215, which more than met my needs for this little project. The tags came on one long strip. You can imagine huge rolls of these things sitting in a warehouse in China, and they just unroll and slice off the amount ordered.

The easiest way to write data to an NFC tag is with a smartphone (with NFC support) and an app. There are a million apps out there, but after doing a little bit of poking around I settled on the NFC Tools app. It’s got a free version and is available on Android and iOS. I’m still a little surprised at how robustly featured the free version is. It will write (and re-write) a surprising range of types of data. I went ahead and purchased the pro version on Android (cost me $3.49) just to see the difference, and it’s really just a matter of the scope of features. The free version (which, rather surprisingly, isn’t ad supported at all) is honestly all that I needed in this little experiment. Writing data to the tag is very easy. Open the app, scan the tag (make sure your smartphone has NFC turned on), select the add record option, choose what sort of data you want to write (in my case a URL), enter the URL, and hit the write button. Done. When you scan the tag, the phone’s default browser will open the URL. A couple of things to note. First, where the NFC antenna is located will vary slightly from device to device. As such, the user might have to move the device around a little bit in relation to the tag. In the case of my phone (a Google Pixel 6), the tag scanned just below the camera bump out. The other thing that I noted is that a phone case might interfere with the scan. With my case on, I had to ensure that the phone was touching the actual tag for it to scan. Without the case, it would scan up to about a centimeter away (but no further). Because of this, I encoded the tags before I affixed them to the prints (mostly so I could press the tag right up to my phone).
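For the record, a phone app isn’t the only way to do the encoding. If you happen to have a USB NFC reader that the Python nfcpy library supports, something like the following sketch would write the same sort of URI record. The reader, the “usb” device string, and the URL are all assumptions here – I did all of my encoding with the NFC Tools app.

```python
# Sketch: writing a URL to an NFC tag with a USB reader via nfcpy + ndeflib,
# as an alternative to a phone app. Assumes an nfcpy-supported reader; the
# "usb" device string and the URL are placeholders.
import ndef
import nfc

MODEL_URL = "https://sketchfab.com/3d-models/example-model-id"  # placeholder

def write_url(tag):
    # Called when a tag lands on the reader; writes a single NDEF URI record.
    if tag.ndef is None or not tag.ndef.is_writeable:
        print("Tag is not NDEF-writeable")
        return False
    tag.ndef.records = [ndef.UriRecord(MODEL_URL)]
    print("Wrote:", MODEL_URL)
    return True

with nfc.ContactlessFrontend("usb") as clf:
    # Blocks until a tag is presented, then runs write_url on it.
    clf.connect(rdwr={"on-connect": write_url})
```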

Once they’d been encoded, I simply peeled the tags off the strip and stuck them to the prints. Unfortunately, due to the shape of the prints, there wasn’t always a good (or consistent) place to stick them. In addition, some parts of the prints were a little rough because they had been attachment points for the printing supports, thereby making it a little more difficult to stick the tags. Thankfully, the tags that I bought had good adhesive. So, despite the irregular shape of the prints, they still stuck.

Step 4: Installation & Exhibit Instructions

Once I tested things out a couple of times, I packaged up the two sets of prints (complete with encoded NFC tags) and the original objects and handed them off to Jessica to take to the museum. They were included in Ramya’s part of the exhibit. Thankfully, Teresa Goforth (MSU Museum’s Director of Exhibits) took care of the visitor instructions and helped test the setup in the exhibit.

I managed to visit the exhibit just before it was torn down. The prints were a central part of Ramya’s section of the exhibit. I was also happy to have gotten a generous shout out in the exhibit credits.

I have zero clue if anyone interacted with the prints and NFC tags at all. I need to follow up with museum staff to see if they received any feedback from visitors. I should also probably check the model views on Sketchfab. Though, there is no way to disambiguate random views from those that came from the museum.
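If I ever do check, the Sketchfab Data API exposes basic per-model info that should include view counts. A quick, hedged Python sketch (the uid is a placeholder, and I’m assuming the model endpoint returns a viewCount field):

```python
# Hypothetical sketch: pulling a model's view count from the Sketchfab Data API.
# The uid is a placeholder, and the viewCount field is an assumption on my part -
# verify against the current API docs.
import requests

MODEL_UID = "your-model-uid"  # placeholder

response = requests.get(f"https://api.sketchfab.com/v3/models/{MODEL_UID}")
response.raise_for_status()
model = response.json()
print(model.get("name"), "views:", model.get("viewCount"))
```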

Takeaways

If I were to do this again, how would I iteratively improve? Well, for one, if I were to do this as more than just an experiment, I’d absolutely do the prints on an SLA printer. While FDM printers are very common (relatively speaking), they just don’t provide the level of resolution in a print that I feel is required for deeper visitor engagement. I’ve been thinking about investing in a mid range SLA printer for my lab (I’ve been eyeing the Anycubic Photon M3 Max). I would still have to address the issue of fumes/ventilation…something you generally don’t have to worry about with FDM printers but which is much more pressing when it comes to SLA printers (my lab is in a basement with no windows). There are an increasing number of “odor/fume-free” resins hitting the market (some of which are plant based, which is kinda cool) that might help solve this issue….but that’s a challenge for another day. Second, I might think about a setup where the NFC tags were on the stand instead of the actual object. By doing this, I’d avoid the problems I encountered with scanning an NFC tag that was awkwardly affixed to an irregularly shaped object. Of course, this means that I’d be separating the object from the means by which hyper-location based content is delivered to the visitor. If these were part of a permanent (or long term) exhibit, I’d absolutely consider getting these custom printed with something visual so that they weren’t just a boring, white circle….maybe a logo or a call to action? An NFC tag with “scan me” printed on it might be neat. Finally, if I were to use Sketchfab as the way to display the models again, I’d absolutely configure them for App-free Augmented Reality (AR) so that the visitor could view and interact with the models in their current environment.

Let’s be 100% clear about this little experiment. It wasn’t particularly innovative or complicated. The moving parts were dead simple…something that anyone with some basic experience in digital heritage could do. Yes, the equipment I used was kinda high end (especially the Artec scanner). However, you could easily do the same thing using photogrammetry and a much less expensive printer. You could even take the printed objects out of the equation entirely and have the NFC tags close to the archaeological objects. The user scans the tags with their phone and they can engage with the interactive 3D objects. The bottom line is that this whole process gave me an opportunity to do some hands on futzing with something that I’d been interested in for a while – NFC tags.
