Creating a Diprotodon for HoloLens – Dippy is Born

[Image: Dippy]

I recently presented at Ignite Australia on Lessons from Hollywood: Building the Interactions of Tomorrow Today. For that talk, and for fun, I wanted to create something from scratch rather than use assets someone else made. Now, I have zero graphical ability, but fortunately my brother Adrian got those genes instead.

My brief was simple: I wanted a model I could use in HoloLens, something a bit different from the cat, dog, ballerina, etc. that you see all the time. It would also be nice to have an Australian theme to it, so that if I want to show this overseas there’s an extra element of Aussie about it :)

He suggested a Diprotodon (essentially a giant prehistoric wombat), which is the largest known marsupial ever to have lived.

[Image: the Dippy model]

Stage 1 – The Model

First he built the model and exported it with a decent poly count as an FBX. I imported that into my Unity project, picked a point in space to place it and, ta-da, a diprotodon model. At this point I thought he looked pretty good and it was time to name him (naming being the hardest thing in software development). What’s more Aussie than shortening a title? So we went with Dippy.

[Image: the textured Dippy model]

Stage 2 – Textured Model

He looked pretty good, so I sent through some pics I took from the HoloLens, and Adrian went away and created a textured version. You can see the fur and nose texture here, but no colour yet.

[Image: Dippy in colour]

Stage 3 – Adding some colour

Next we wanted to make him look like a wombat, so Adrian created the “maps”: four files that I had no idea what to do with (because I’m a dev, not a designer).

I went through a few different shaders and, with some chat help, we decided the most appropriate was the Standard (Specular setup) shader, and then matched it up to the maps he’d given me:

[Image: the Standard (Specular setup) shader in the Inspector]

I had to cheat and look up what some of these meant to help with the guesswork, but here’s my non-designer summary:

Albedo – the base colour or diffuse map; it defines the colour of diffused light. This one I matched up to dip_diffuse, or as I like to call it, the coloured fur map.

[Image: the dip_diffuse map]

Specular – controls shininess and highlight colour. I initially matched this one to dip_specular based on his naming.

[Image: the dip_specular map]

Occlusion – a greyscale map marking which areas should receive less ambient light, or as I call it, the black and white Dippy. This matched to dip_ao.

[Image: the dip_ao map]

Normal – lets you add surface detail like bumps, scratches, etc. This matched up to dip_normal.

[Image: the dip_normal map]

My final shader looked like this.

[Image: the final shader settings]
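
If you’d rather wire this up in code than drag textures around the Inspector, here’s a minimal sketch of the same setup. I’m assuming the maps live in a Resources folder and the script sits on the Dippy GameObject (both my choices for illustration); the property names and keywords are the ones Unity’s Standard shader family uses.

    using UnityEngine;

    public class DippyMaterialSetup : MonoBehaviour
    {
        void Start()
        {
            // Grab Dippy's material (this instantiates a per-renderer copy).
            Material mat = GetComponent<Renderer>().material;

            // Standard (Specular setup) texture slots matched to Adrian's maps.
            mat.SetTexture("_MainTex", Resources.Load<Texture>("dip_diffuse"));       // Albedo
            mat.SetTexture("_SpecGlossMap", Resources.Load<Texture>("dip_specular")); // Specular
            mat.SetTexture("_OcclusionMap", Resources.Load<Texture>("dip_ao"));       // Occlusion
            mat.SetTexture("_BumpMap", Resources.Load<Texture>("dip_normal"));        // Normal

            // The Standard shader only samples these maps when the matching
            // keywords are on (assigning in the Inspector does this for you).
            mat.EnableKeyword("_SPECGLOSSMAP");
            mat.EnableKeyword("_NORMALMAP");
        }
    }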

He looked pretty cool at this point, but there was a bit of a shimmer. We tried a few things like dropping the poly count, but then I noticed it on most holograms – even the simple cat in the Holograms app. So I left it for a bit.

[Image: Dippy with the scanline map]

Stage 4 – Scanlines

For my demo, I wanted to show two different ways of using the same model but with different “definition”. In this case I wanted the same poly count but different maps – one with detailed fur and another with something simpler, so you knew it was a diprotodon but didn’t need the detailed colour.

[Image: the dip_scan_line map]

This time I used the HoloToolkit Standard Fast shader and dropped the dip_scan_line map onto the albedo, emission and detail slots.
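
In code that’s just the one texture pushed into three slots. A sketch, assuming the Standard Fast shader keeps the Standard shader’s property names – an assumption worth checking against the shader source if you try this:

    using UnityEngine;

    public class DippyScanLineSetup : MonoBehaviour
    {
        void Start()
        {
            Material mat = GetComponent<Renderer>().material;

            // One map, three slots: albedo, emission and detail.
            Texture scan = Resources.Load<Texture>("dip_scan_line");
            mat.SetTexture("_MainTex", scan);          // Albedo
            mat.SetTexture("_EmissionMap", scan);      // Emission
            mat.EnableKeyword("_EMISSION");            // needed before emission is sampled
            mat.SetTexture("_DetailAlbedoMap", scan);  // Detail
        }
    }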

[Image: Dippy before the anti-aliasing fix]

Stage 5 – Fixing the shimmer

Presentation done, but the shimmer was still bugging me. It took me a bunch of goes to capture it, but you can see it in a few of the pics above; I think this one highlights it best. To me, the Dippy on the left really stands out around the edges.

So I took advantage of the Unity guys being at Ignite and showed them. I really wanted to know whether it was something we should have fixed at the model level, the Unity level, or even the code level. John suggested the project’s anti-aliasing settings.

Normally all the doco points to picking the “Fastest” / “worst” quality level and nothing else, and anti-aliasing is set to Disabled by default.

[Image: the project Quality Settings]

We made the simple change of setting Anti Aliasing to 4x Multi Sampling, re-deployed, and wow, what a difference that made.
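
If you’d rather flip it from code – say, to compare the two at runtime – Unity exposes the same setting through QualitySettings. A one-line sketch (the MonoBehaviour wrapper is just for illustration):

    using UnityEngine;

    public class MsaaToggle : MonoBehaviour
    {
        void Awake()
        {
            // 0 disables MSAA; 2, 4 or 8 set the per-pixel sample count.
            QualitySettings.antiAliasing = 4;
        }
    }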

[Image: Dippy after enabling 4x MSAA]

That shimmer is gone!

Using Sentiment Analysis To Enhance My Travel Booking Evaluation

[Image: the packed car]

Over the Xmas break we spent some time in Falls Creek. We were driving so we could take our bikes, clothes, nutrition, etc. with us. It was a bit too far for us to do the drive in one hit (even with 2 drivers), so we decided to do it in two legs, with a first day of about 1300ks.

This meant we needed accommodation for the first night, which presented the following challenges:

  • We’d be tired after 1300ks of driving
  • We weren’t sure what the traffic would be like, so we could well be arriving at night
  • It was Xmas Day – lots of things closed and possibly lots of traffic

So I booked something that was available with a decent rating.

[Image: the hotel’s rating and rating distribution]

Then, after the mad rush to gather everything we needed to book this trip, I delved a bit deeper into the hotel I’d chosen.

[Image: a 5-star review and response]

The 5-star reviews and their responses looked OK.

[Image: a 1-star review and response]

But a few of the 1-star reviews and their responses were slightly alarming. Given that it was Xmas Day, that we’d be tired, and that we’d likely be arriving late (meaning if anything went wrong we’d be sleeping in the car), I promptly cancelled and booked the next cheapest place. But was I wrong to do this?

It got me thinking about how I could have analysed this data in a more scientific way, and I thought the Cognitive Services APIs could help.

First, the site didn’t provide a feed, which made this a bit harder – but if you had this data yourself it’d be a lot easier…

I grabbed all the reviews and looked at what Sentiment Analysis and Key Phrase Extraction could do for me.

[Image: sentiment result for a 5-star review]

I did a manual test first – I took a 5-star review and checked the overall sentiment and the kinds of key phrases it produced.

[Image: sentiment result for a 1-star review]

Then I did the same with a 1-star review.

All of the 110 reviews were in English, so to analyse every record I only needed two APIs: Sentiment and Key Phrases.

Then I created a small console app to call sentiment:

[Image: the console app calling the Sentiment API]
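
The screenshot has the details, but the shape of the call is roughly this – a minimal sketch assuming the v2.0 Text Analytics sentiment endpoint in the westus region, with YOUR_KEY standing in for a real subscription key and a made-up review as the payload (the key phrases call is the same request against /keyPhrases):

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class Program
    {
        // Assumed region/version – substitute your own endpoint and key.
        const string Endpoint =
            "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment";
        const string Key = "YOUR_KEY";

        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Key);

                // The API takes a batch of documents; the id ties each
                // score back to its review.
                var body = "{\"documents\":[{\"language\":\"en\",\"id\":\"1\"," +
                           "\"text\":\"Great room, friendly staff, would stay again.\"}]}";

                var response = await client.PostAsync(Endpoint,
                    new StringContent(body, Encoding.UTF8, "application/json"));

                // Each document comes back with a score between
                // 0 (negative) and 1 (positive).
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }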

and after verifying that worked, I extended it to run sentiment on both the review and the response, and to pull out the key phrases as well.

So what did I find?

  • Average review sentiment was 70% (110 results)
  • Average response sentiment was 85% (24 results)

Top 5 key phrases:

  • Room
  • Restaurant
  • Wifi
  • Bed
  • Staff

So, could I have kept the booking? And what about the people who gave it a bad review – are they always negative?

[Image: the 1-star reviewer’s profile]

So I looked at one of the 1-star reviewers, and she’s generally been a decent reviewer. I went through all of her reviews, ran sentiment analysis on them, and found her average was 82.6% from 7 reviews.

To me she doesn’t seem to be a negative reviewer by nature – maybe she just had bad luck.