Rails Girls Brisbane – wrap up

railsgirlsbanner

I’d seen posts about Rails Girls events in other countries and cities, and I was excited to see that they were going to hold one here in Brisbane. When Katie Miller (@codemiller) posted on our Girl Geek Dinners wall about the event, I checked it out and saw they were looking for volunteers, so I asked what I could help with and the answer came back: “Do you want to help with mentoring?” I’ve done many years of dev and bits of training and presenting, but haven’t actually done any Ruby/Rails. I often find the best way to learn is to teach and help others – so why not!


The evening of Friday the 24th came along fairly quickly. Tonight was installation night, where the goal was to get as many of the ladies as possible set up and running on their machines so they could get to the more exciting stuff the next day. Not only had they filled to capacity at 50, they also had about half as many again on the wait list!


We knew from the registrations we had a massive range of ages and experience. One of the questions on the form was “What OS are you running?” and at least one of the answers was “Laptop”. This was really going to challenge some of us, I think: how to step back from our tech jargon and explain the bits we do every day to normal people.


After a bit of food and drinks to loosen up our nervous tongues we embarked on our mission to get people up and running.


My first victim (ahh… I mean participant) was Minh McCloy. She was a lovely lady, not afraid to ask questions, and a keen citizen journalist. I’ll be interested to see what she writes up about the event, as she was running around interviewing people everywhere. The RailsInstaller site has nice download packages for lots of different environments. I’d brought everything I used to set up my laptop and Surface with me on a USB, but was quickly reminded I’m not normal… I have new gear that has been running 64-bit for a long time. So my first task was to download the 32-bit installers for everything. Other than that we got Minh going quick smart.


Next I helped a mother/daughter combination who had a Mac and a Windows PC. Interestingly, the Mac download is 10 times bigger! Setting everyone up with a GitHub account proved a bit of a problem when GitHub stopped us signing up because we’d hit the sign-up request limit for our IP address (oops).


Now that most people had settled in, Adrian from Enova opened the night and set the scene for what we’d be doing tonight and Saturday. One of the main goals was to have fun!


To get a bit of a group dynamic going we started with the “Marshmallow Challenge”: take a pack of spaghetti and a bag of marshmallows and see how high you can build a structure.


So with about 1/3 of the “recommended ingredients” our group started off our tower. As we got some height we needed a bit more bracing on our lower levels.


We opted for the double spire at the top to cheat our way to that last bit of height. Note for the future: the marshmallows get squishy and move the more you touch them, which causes some twisting in your structure.


There was a definite triangle theme going on around the room.


I think these ladies had the neatest tower – and might have copied our spire idea 😉


To finish off the opening we did a Friday Hug photo. Unfortunately I didn’t have my wide-angle lens with me, so I had to do a stitch. It’s gotta be one of the most out-of-focus photos I’ve taken in a long time, as I didn’t really prepare very well for it. But since the resolution here is so small you don’t notice it so much. Now, how often do you get a photo with this many women in it when it’s something to do with tech? Not often…


The rest of the night we spent setting up more computers and making sure everyone was good to go for Saturday. You don’t realise how quick your computer, and especially your SSD, is until you go and work on some of these older laptops with “normal drives”.


Luckily there was plenty of drinks and people to talk to while it was all getting done.


Bright and early Saturday morning we all started arriving. Not too long after, we had a full room of people! You always worry about losing a bunch of people on day 2, but I think we actually gained a few – a great sign when people come back for the second day. We spent the morning verifying everyone was set up and working on some of the more problematic machines, i.e. ones without admin access, ones with corrupted downloads, etc. Some people were so keen for this event that one lady (Tracy Mu Sung, @tracymusung) had flown all the way up from Sydney to attend.


The morning was spent on a crash course covering the internet, HTML and IRB, including the basic data types – strings, ints, floats – and methods. For me it was time to sit up the back and quietly ask the other mentors, who know Rails, lots of little questions on the finer details, like: so why does 10 == 10.0 return true?
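For anyone else wondering, the answer is that Ruby’s numeric `==` compares by value across numeric types, while `eql?` also requires the class to match. A quick IRB-style sketch of what we were playing with (the exact examples are my reconstruction, not a transcript of the day):

```ruby
# Basic data types, as explored in IRB.
puts 10.class          # => Integer (Fixnum on the 1.9-era Rubies of the time)
puts 10.0.class        # => Float
puts "hello".class     # => String

# == between numerics compares numeric value, converting across types...
puts 10 == 10.0        # => true

# ...while eql? requires the same value AND the same class.
puts 10.eql?(10.0)     # => false
```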


The next big section was letting the ladies have a go at their own pace with TryRuby. I’d had a go at this site and it’s really nicely laid out, letting you try commands on one side and read instructions and background information on the other. Talking to a bunch of participants, they really seemed to enjoy going through this one. It let them go at their own pace and then ask questions if they needed something explained in a bit more detail. The front row you can see in the picture was lucky enough to have Matt Connolly (@matt_connolly) set up as their personal help desk for the day.


It was about 11:30 when we asked if the ladies wanted to break for lunch a bit early, and the answer was a resounding NO! We want to learn more! After some more learning and some much-needed brain refuelling at lunch, it was on to the really exciting part – making a photo website.


This is where being “mentor rich” really came into its own. It meant there was at least one helper per row, and help was never very far away.


It allowed the ladies to work along at their own pace and get help or ask more in-depth questions as they went. Just remember, we had people from all walks of life – teachers, nurses, book-keepers, lawyers etc.


I think my favourite sight was seeing them helping each other!

Overall I think the day was a great success. I heard some great feedback on the day, and also saw some great stuff on Twitter and Facebook:

Jeya Karthika : @ItsJkTweeting: My first rails app is up & pushed to github as well. Wow. Thanks to all mentors, sponsors and @RiverCityLabs for this awesome #RailsGirlsBNE

Dayle Parker – Thanks to the mentors and everyone who organized this awesome event! It was a blast! 🙂

Kerry Kerry K – I very much appreciated the non judgemental enthusiasm, I lost my way for most of the day- I would like/ need to do it again , sit up the front where the white board is visible. Wow did the organisers expect this much interest " It was a geek girl stampede. " <hello ‘my’ world>

 


I was really impressed at the patience and enthusiasm the mentors put into the event. They truly seemed like a bunch of really nice people… even if they did pay me out for being a .Net developer. Thanks guys for letting me infiltrate the group for the day. These people gave up their Friday night and Saturday to share their love of Ruby with these ladies, and did a fantastic job. Here’s a big thumbs up to Nigel, Dan, Adrian, Katie, Robert, Jason, Matt, Nick, Odin, Rob Dawson, Damien and everyone else I didn’t manage to get a photo of!

There’s a Facebook group you can join to stay in touch and hear about the next event (due to popular demand, already in planning for later in the year): http://www.facebook.com/groups/462831463794656/

Hopefully they’ll see many of this weekend’s participants return to learn the next level of Ruby Dev!

Edit: If you don’t believe me that it was a successful day, here are a couple of blog posts from some of the participants:

Tracy Mu Sung @tracymusung: http://www.tracecode.com.au/blog/ruby/rails-girls-brisbane/ . “Unlike the Geek Girls events I have been to in Sydney, everyone was really, really friendly. I wonder if Brisbane is friendlier than Sydney?”

Jeya Karthika @ItsJkTweeting: http://freshsqueaks.com/railsgirls/#.UaKbqk5-9DF . I love her description: “Vibrant Atmosphere. Friendly Mentors. Excited Women. Friday Hugs. Witty Questions. Fervent Coding. – This is how I would describe the event.”

 

Kinect For Windows Interactions Gallery – Interaction Stream

Now that we’ve outgrown all the controls in the Interaction Gallery and exhausted all the ways we can use KinectRegion to make our own controls, we get into the thick of it and look at the InteractionStream itself. The cool thing is we can use the stream in non-WPF applications, so you could write a console app or an XNA game that uses the stream data. Here we can get data for multiple users and both of their hands, and see what state each is in.

Ben from Microsoft has already written a great post on how to use the data here. So instead of basically repeating him, I’ll let you read that and just note a few things that struck me when I first used it.

1. It doesn’t follow the same pattern as the other streams – most of us are used to enabling our stream and handling its frame ready event. This one looks a bit different:

_interactionStream = new Microsoft.Kinect.Toolkit.Interaction.InteractionStream(e.NewSensor, new MyInteractionClient());
_interactionStream.InteractionFrameReady += InteractionFrameReady;

Firstly, we keep a copy of the stream, and we have to give it something that implements the IInteractionClient interface.
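As a rough sketch (this is my own minimal version, not the official sample), an IInteractionClient just has to answer the stream’s question of how the UI under a given hand position should behave – the property values below are placeholder assumptions for a grip-only demo:

```csharp
using Microsoft.Kinect.Toolkit.Interaction;

// Minimal IInteractionClient sketch. The interaction stream calls this for
// each hand position to find out whether the UI at that point is a press
// and/or grip target.
public class MyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId, InteractionHandType handType, double x, double y)
    {
        // For a simple demo, treat the whole surface as grippable but not pressable.
        return new InteractionInfo
        {
            IsGripTarget = true,
            IsPressTarget = false
        };
    }
}
```

In a real application you’d hit-test x and y against your UI and answer per control.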

2. The InteractionFrameReady event isn’t “it” – normally, for other streams like depth, we handle the frame ready event and we’re all happy. With InteractionStream, if you only do that, nothing useful happens… To get anything useful, you need to hook up both the SkeletonFrameReady and DepthFrameReady events. In the skeleton handler the key bits are:

skeletonFrame.CopySkeletonDataTo(_skeletons); 
var accelerometerReading = _sensor.AccelerometerGetCurrentReading();  
_interactionStream.ProcessSkeleton(_skeletons, accelerometerReading, skeletonFrame.Timestamp); 

We need to copy the skeleton data somewhere for later, and tell the interaction stream to process the skeletons.

In the depth stream it’s similar:

_interactionStream.ProcessDepth(depthFrame.GetRawPixelData(), depthFrame.Timestamp);

We need to tell the InteractionStream to process the depth data.
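Putting those pieces together, the wiring looks roughly like this – a sketch assuming the `_sensor`, `_skeletons` and `_interactionStream` fields from the snippets above:

```csharp
// Hook both events; each handler feeds its frame into the interaction stream.
_sensor.SkeletonFrameReady += (s, args) =>
{
    using (var skeletonFrame = args.OpenSkeletonFrame())
    {
        if (skeletonFrame == null) return; // frame may already have been discarded

        skeletonFrame.CopySkeletonDataTo(_skeletons);
        var accelerometerReading = _sensor.AccelerometerGetCurrentReading();
        _interactionStream.ProcessSkeleton(_skeletons, accelerometerReading, skeletonFrame.Timestamp);
    }
};

_sensor.DepthFrameReady += (s, args) =>
{
    using (var depthFrame = args.OpenDepthImageFrame())
    {
        if (depthFrame == null) return;

        _interactionStream.ProcessDepth(depthFrame.GetRawPixelData(), depthFrame.Timestamp);
    }
};
```

Once both streams are feeding it, InteractionFrameReady starts firing with useful hand data.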


Kinect For Windows Interactions Gallery – Kinectify my own control

Now that we’ve looked at the existing controls in the Interaction Gallery, what happens when we want some other control “Kinectified”? Generally speaking, I think you can get most of what you want without going directly to the interaction stream, using the events and properties exposed by the KinectRegion. So for our example we’re going to make a “Kinectified” CheckBox.

If you look at Microsoft.Kinect.Toolkit.Controls.KinectButtonBase it will give you pretty much everything you need for this.

public class MyCheckBox : CheckBox
{
    private static readonly bool IsInDesignMode = DesignerProperties.GetIsInDesignMode(new DependencyObject());
    private HandPointer _capturedHandPointer;

    public MyCheckBox()
    {
        if (!IsInDesignMode)
        {
            Initialise();
        }

    }

    private void Initialise()
    {
        KinectRegion.AddHandPointerPressHandler(this, this.OnHandPointerPress);
        KinectRegion.AddHandPointerGotCaptureHandler(this, this.OnHandPointerCaptured);
        KinectRegion.AddHandPointerPressReleaseHandler(this, this.OnHandPointerPressRelease);
        KinectRegion.AddHandPointerLostCaptureHandler(this, this.OnHandPointerLostCapture);
        KinectRegion.AddHandPointerEnterHandler(this, this.OnHandPointerEnter);
        KinectRegion.AddHandPointerLeaveHandler(this, this.OnHandPointerLeave);

        KinectRegion.SetIsPressTarget(this, true);
    }
}

Here we create our own checkbox based on the normal checkbox.

We initialise our control and hook up the events we care about from the KinectRegion our control will sit inside, so they are passed through and we can handle them.

We want the checkbox to act like the buttons, so we make sure we call SetIsPressTarget with true.

private void OnHandPointerLeave(object sender, HandPointerEventArgs e)
{
    if (!KinectRegion.GetIsPrimaryHandPointerOver(this))
    {
        VisualStateManager.GoToState(this, "Normal", true);
    }
}

private void OnHandPointerEnter(object sender, HandPointerEventArgs e)
{
    if (KinectRegion.GetIsPrimaryHandPointerOver(this))
    {
        VisualStateManager.GoToState(this, "MouseOver", true);
    }
}

The HandPointerLeave and Enter are similar to a mouse leave/enter. As we have two hands, we first ensure the hand over the object is the “Primary Hand” before we change the look and feel of the control.

private void OnHandPointerLostCapture(object sender, HandPointerEventArgs e)
{
    if (_capturedHandPointer == e.HandPointer)
    {
        _capturedHandPointer = null;
        IsPressed = false;
        e.Handled = true;
    }
}

private void OnHandPointerCaptured(object sender, HandPointerEventArgs e)
{
    if (_capturedHandPointer == null)
    {
        _capturedHandPointer = e.HandPointer;
        IsPressed = true;
        e.Handled = true;
    }
}

private void OnHandPointerPress(object sender, HandPointerEventArgs e)
{
    if (_capturedHandPointer == null && e.HandPointer.IsPrimaryUser && e.HandPointer.IsPrimaryHandOfUser)
    {
        e.HandPointer.Capture(this);
        e.Handled = true;
    }
}

For the Capture and Lost capture we want to grab a reference to the hand pointer to ensure we’re checking the state of the same hand and setting whether we’re in a pressed state correctly.

When we detect a press, we want to ensure it’s the primary hand of the primary user before handling the event.

private void OnHandPointerPressRelease(object sender, HandPointerEventArgs e)
{
    if (_capturedHandPointer == e.HandPointer)
    {
        if (e.HandPointer.GetIsOver(this))
        {
            OnClick();
            VisualStateManager.GoToState(this, "MouseOver", true);
        }
        else
        {
            VisualStateManager.GoToState(this, "Normal", true);
        }

        e.Handled = true;
    }
}

For the press release – similar to a left mouse up – we only want to fire OnClick when they let go; here, that’s when they release the press on the control.

In this case – we need to see where their hand is. If it’s over the control we fire a click, but if they’ve moved off they’ve effectively cancelled the click so we go back to a normal state.

Now we can put our control into a KinectRegion and see it in action. Note – I haven’t changed the style of the checkbox here. You’d most likely want to make the actual check box much bigger, or give it more of a toggle-switch look, to make it easier to press.

<k:KinectRegion Name="KinectRegion" Height="350" VerticalAlignment="Top">
    <interactionStream:MyCheckBox VerticalAlignment="Center" HorizontalAlignment="Center" Margin="0,300,0,0"/>
</k:KinectRegion>

When we run, we now get the hand pointer with indicators for hover, press and release, and importantly the checkbox state changes.

check_hover

Hovering over the checkbox

check_press

Pressing / Checking the checkbox

check_checked  

Checked Box.

 

Using these principles, you should be able to make all the one-handed/single-person controls you need. When you need two hands or two people, you’ll need a bit more thought and more custom code on how to deal with, and visually indicate, what’s going on. In that case you should look more into the Controls project – KinectRegion, KinectCursor and KinectAdapter.

 


Kinect For Windows Interactions Gallery – KinectScrollViewer

The next control I want to touch on in the Interactions Gallery is the KinectScrollViewer. Now that we’ve learnt about regions and button controls, it’s time to see how we handle putting a lot of them on the page.

 

Let’s first put the scrollviewer on the page and to make it obvious when we’ve hit it, we’ll set a bright colour for the hover:

<k:KinectScrollViewer HoverBackground="YellowGreen"></k:KinectScrollViewer>

scrollviewer_normal

Normally, it looks no different, but when we hover our hand over the screen the scrollviewer turns “YellowGreen”

scrollviewer_hover

I can move my hand around the screen etc. If I close my fist, the hand changes to indicate I’m “gripping”

scrollviewer_grip

This is all good, but I have no content, so there is nothing to scroll yet. Let’s add a bunch of KinectButtons and see what effect this has.

First I’ll add a horizontally scrolling scroll viewer in the centre of the screen:

            <k:KinectScrollViewer HoverBackground="YellowGreen" VerticalScrollBarVisibility="Disabled" HorizontalScrollBarVisibility="Auto" VerticalAlignment="Center">
                <StackPanel Orientation="Horizontal" Name="buttonContent"></StackPanel>
            </k:KinectScrollViewer>

And then we’ll add 100 KinectTileButtons to the scrollviewer in codebehind (just for the sake of simplicity)

for (var i = 1; i <= 100; i++)
{
    buttonContent.Children.Add(new KinectTileButton {Content = "Button " + i, Margin = new Thickness(0,40,40,40)});
}

When we run this we see the items as per normal:

scrollviewer_itemnormal

The colour changes when we hover over the scrollviewer:

scrollviewer_itemhover

The grip indicator turns on when we close our fist:

scrollviewer_itemgrip

Then we are free to move the items slowly with a closed fist or to move quickly through the list by moving our fist fast and letting the momentum move it for us.

I found the grip-and-drag a bit odd at first, being used to hover-and-hold from my Xbox. Once past the initial difference, it’s easy to get quicker at using it and be scrolling all over the screen like a pro.

So that’s all you have to do to hook up simple controls that work with the interaction stream out of the box. If you need to do more complicated things in your own controls and get to the properties of the hands etc., you need to access the interaction stream directly.


Kinect For Windows Interactions Gallery – KinectTileButton and KinectCircleButton

The next controls I want to touch on in the Interactions Gallery are the KinectTileButton and KinectCircleButton. Now that we have our KinectRegion and can see what we’re doing with the UserViewer control, we can start interacting.

In its simplest form, the KinectTileButton still has a lot of visual features. Simply add it to your XAML and run:

        <k:KinectRegion Name="KinectRegion">
            <k:KinectTileButton />
        </k:KinectRegion>

TileButton_normal

In its default form it’s a giant purple button that you can easily put your hand cursor over. With the interactions, the team has added the ability to detect a “push” gesture from either hand. As I start to press in with my hand, the cursor changes to indicate that I’m pressing.

 TileButton_pressing

The cursor fills with more purple lines until I’ve pushed in far enough to trigger a button click, and then it changes colour completely.

TileButton_pressed
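Because the push gesture surfaces as an ordinary Click event, handling it is no different from a mouse click. A small sketch – the handler wiring here is my own, not from the gallery sample:

```csharp
// KinectTileButton raises Click when the push gesture completes,
// so we can handle it exactly like any other WPF button.
var tile = new KinectTileButton { Label = "Press Me" };
tile.Click += (s, e) => MessageBox.Show("Tile pressed!");

// Host it in the region from the XAML above (the region holds a
// single child here, like a ContentControl).
KinectRegion.Content = tile;
```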

We can do other normal button things like set the title and the background of the button:

 

 <k:KinectTileButton HorizontalLabelAlignment="Left" Label="Press Me" LabelBackground="AliceBlue"/>

tilebutton_label

The KinectCircleButton is almost identical, except it provides a circular button rather than the boxy metro type.

        <k:KinectRegion Name="KinectRegion">
            <Grid>
                <k:KinectTileButton HorizontalLabelAlignment="Left" Label="Tile Button" HorizontalAlignment="Left" VerticalAlignment="Center"/>
                <k:KinectCircleButton HorizontalAlignment="Right" VerticalAlignment="Center" Label="Circle Button"></k:KinectCircleButton>
            </Grid>
        </k:KinectRegion>
circleButton 
 

Kinect For Windows Interactions Gallery – KinectUserViewer

The next control I want to touch on in the Interactions Gallery is the KinectUserViewer. Now that we know our Kinect is plugged in and working, most of us want to give the user some indication of where they are in relation to the Kinect, to ensure they are the correct distance and position from it for the things we want them to do. We’ve tried a few things in the past:

1. We made a little WPF skeleton that we’d overlay over some of our interactions. This was interesting to play with but tended to be a bit distracting.

2. Use the depth camera to give a silhouette effect and give it some colour. We found this much more useful as it resembled the user, so they were more easily able to tell that the Kinect had recognised them and not the guy standing to the side.

I’ve seen various incarnations of this by other people too, so it’s good to see we now have a standard control that we don’t all have to write from scratch. To add this control, we continue on from where we left off with the KinectRegion.

XAML:

Add the UserViewer Control and bind it to our Kinect Region:

<k:KinectUserViewer k:KinectRegion.KinectRegion="{Binding ElementName=KinectRegion}" />

Run the project and we can see ourselves:

userviewer_full

Don’t like the colours for the user? Then we can change them really easily with DefaultUserColor or PrimaryUserColor:

<k:KinectUserViewer k:KinectRegion.KinectRegion="{Binding ElementName=KinectRegion}" PrimaryUserColor="DarkCyan" DefaultUserColor="Crimson"/>

 

userColour1 userColour2

 

Now, taking up the whole screen could be good to start with, but it will probably get in our road fairly quickly. So we’ll shrink the control and move it up out of the way at the top of the screen in the XAML:

<k:KinectUserViewer k:KinectRegion.KinectRegion="{Binding ElementName=KinectRegion}" VerticalAlignment="Top" HorizontalAlignment="Left" Height="50"/>

Now when we run, we get an idea where we are, and it doesn’t get in our way for other things.

userviewer_small

 

Kinect For Windows Interactions Gallery – KinectRegion

The next control I want to touch on in the Interactions Gallery is the KinectRegion Control. It’s a canvas for the other Kinect controls and is associated with a particular sensor.

To add it:

XAML

<k:KinectRegion Name="KinectRegion"></k:KinectRegion>

Code behind:

1. Create our sensor changed event

 _sensorChooser.KinectChanged += SensorChooserOnKinectChanged;

2. In our event associate the control with the sensor

 KinectRegion.KinectSensor = e.NewSensor;

When we run this the first time, you’ll likely get an error like:

Unable to invoke library ‘KinectInteraction170_32.dll’

To fix this we need to add two linked files to our project. They can be found at:

C:\Program Files\Microsoft SDKs\Kinect\Developer Toolkit v1.7.0\Redist\amd64\KinectInteraction170_64.dll

and

C:\Program Files\Microsoft SDKs\Kinect\Developer Toolkit v1.7.0\Redist\x86\KinectInteraction170_32.dll

To do this, in the root of the project, right click and Add Existing Item

AddItem

Choose the dll from your file system and Add As Link

AddLink

Once the item is added, make sure you set its Copy to Output Directory property to Copy If Newer.

When we run this time, the project doesn’t crash, but we don’t see anything on the screen. We need to make sure both the Depth and Skeleton streams are enabled. So now we add some standard sensor code and enable the streams and settings we want. I’m going to enable near mode and seated skeleton tracking to make the demo easier. You can find this boilerplate code in most of the samples, but I’ll add it here as an example too.

private void SensorChooserOnKinectChanged(object sender, KinectChangedEventArgs e)
{
    bool errorOccured = false;

    if (e.OldSensor != null)
    {
        try
        {
            e.OldSensor.DepthStream.Range = DepthRange.Default;
            e.OldSensor.SkeletonStream.EnableTrackingInNearRange = false;
            e.OldSensor.DepthStream.Disable();
            e.OldSensor.SkeletonStream.Disable();
        }
        catch (InvalidOperationException)
        {
            // KinectSensor might enter an invalid state while enabling/disabling streams or stream features.
            // E.g.: sensor might be abruptly unplugged.
            errorOccured = true;
        }
    }

    if (e.NewSensor != null)
    {
        try
        {
            e.NewSensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            e.NewSensor.SkeletonStream.Enable();
            try
            {
                e.NewSensor.DepthStream.Range = DepthRange.Near;
                e.NewSensor.SkeletonStream.EnableTrackingInNearRange = true;
                e.NewSensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
            }
            catch (InvalidOperationException)
            {
                // Non Kinect for Windows devices do not support Near mode, so reset back to default mode.
                e.NewSensor.DepthStream.Range = DepthRange.Default;
                e.NewSensor.SkeletonStream.EnableTrackingInNearRange = false;
                errorOccured = true;
            }
        }
        catch (InvalidOperationException)
        {
            // KinectSensor might enter an invalid state while enabling/disabling streams or stream features.
            // E.g.: sensor might be abruptly unplugged.
            errorOccured = true;
        }

        if (!errorOccured)
        {
            KinectRegion.KinectSensor = e.NewSensor;
        }
    }
}
Now when we run, we see the hand cursor.
cursor 

Now we’re ready to add controls that work with the cursor!

 


Kinect For Windows Interactions Gallery – KinectSensorChooserUI

When we look at the range of controls in the Interactions Gallery, the first one I want to highlight is the KinectSensorChooserUI.

It’s a nice little control that shows you the status of your Kinect. It gives a nice, consistent way to visually indicate to the user that there’s something wrong, and gets around the question of “is this thing on?”. It’s also really easy to add in its most basic form.

1. Add the control to your UI

<k:KinectSensorChooserUI HorizontalAlignment="Center" VerticalAlignment="Top" Name="SensorChooserUi"/>

2. Add it in the code behind

private KinectSensorChooser _sensorChooser;

public MainWindow()
{
    InitializeComponent();
    Loaded += MainWindowLoaded;
}

void MainWindowLoaded(object sender, RoutedEventArgs e)
{
    _sensorChooser = new KinectSensorChooser();
    SensorChooserUi.KinectSensorChooser = _sensorChooser;
    _sensorChooser.Start();
}

When you run this, it gives you a visual indicator of the status of your Kinect, and hovering over the control gives a little more detail, e.g.:

No Kinect normal and then in hover:

nosensor_smallnosensor_hover 

Once you plug your Kinect in:

initialising_smallinitialising_hover 

When your Kinect is connected:

connected_smallconnected_hover 

There’s also a property you can set to show that the Kinect is listening. This one is great if you want a visual indication to the user that the microphone is on or that we’re waiting for an audio response.

You can set this on the UI control

<k:KinectSensorChooserUI HorizontalAlignment="Center" VerticalAlignment="Top" Name="SensorChooserUi" IsListening="True"/>
or by setting the property in code behind:

SensorChooserUi.IsListening = true;

When you run and connect, the control looks a bit different.

isListening_hover isListening_small

 


Kinect For Windows Interactions Gallery

interactionsGallery

As the Kinect For Windows SDK has evolved, the team has been adding some nice little controls – ones that are quite useful, and that everyone was writing in one way or another to solve the same issues. I think it’s a really good step: we’re not all spending a bunch of time writing similar controls, and it means there should be some consistency going forward if people use the supplied controls. That will help users with the learning curve across the many applications.

When you first look through the Interactions Gallery it’s a bit overwhelming, as there’s a bunch of controls plus the interaction stream to deal with all at once. For this reason I wanted to do a set of posts so we can concentrate on them one at a time.

 


Kinecting The Dots – Interactions with the Kinect SDK – BNE 28 May

I’ve been hanging out for the 1.7 release of the Kinect SDK so I can show off a bunch of its improvements and new features. This month we’re presenting at the Brisbane .Net User Group. We’re planning to take you through a bunch of the new features, take a deeper look at the Interactions Gallery and its controls, and show a few fun demos. We’re hoping to make the session quite interactive, so come prepared to get out of your chairs. Details of the session:

Kinecting The Dots – Interactions with the Kinect SDK

The Microsoft Kinect has come a long way since its release in November 2010, with the Kinect for Windows SDK and device released in February 2012. In this session Bronwen and John will take you through some of the latest features in the 1.7 SDK release, and delve into the Interaction Gallery looking at some of the Kinect controls and interactions to help you build better navigation and engagement in your next Kinect application.

Where: Brisbane .Net User Group – Microsoft Office – Level 28 400 George Street Brisbane

When: 28 May 2013 – 6pm onwards

Register: here

PhoneGap with Android – Hello World App

A while ago we started building a Bing Maps app for a client using PhoneGap, initially targeting iPhone. Now that we’re happy with the functionality, we’re looking at the Android version. Today I was trying to get to step one: set up the environment and have a “hello world” app run.

I learnt/reinforced in my mind a few things today:

  • Skimming instructions and not following every step precisely = more pain than any time saved by rushing in.
  • Just because something isn’t working doesn’t mean I’m an idiot – it might actually be broken.

To start, I needed to set up my computer with the right tools for developing on Android. The PhoneGap site has a good overview on how to do this. My biggest recommendation here is to read the instructions CAREFULLY and not skip over steps or skim through it. I faced a few issues by jumping the gun a bit. While I could call “java” from a command prompt, the %JAVA_HOME% variable wasn’t set correctly, so I spent a while scratching my head before I retraced my steps and did it properly.

What struck me, coming from an environment like .Net, was how ’80s the setup all felt: here I was setting ENVIRONMENT and PATH variables, cutting and pasting paths out of Explorer like I did many years ago.

After I created the blank project, running up the blank application was easy the first time. It’s cool how many devices you can configure to deploy to. My first problem was that I couldn’t seem to run my app more than once on the same emulator. I tried keeping it open; I tried closing and re-deploying. The only thing that consistently worked was deleting, creating and running again. This is really time consuming, as the emulator takes quite a while to spin up the first time.

logcat

This can’t be right; I’m clearly doing SOMETHING wrong. I retraced my steps and couldn’t see anything obvious I’d done wrong. In my LogCat window the above “Unexpected value from nativeGetEnabledTags: 0” was filling up the log, but only the second time I tried to use an emulator. After a bit of searching I found a few other people had hit the same problem here and here. I tried installing to different directories and uninstalling/reinstalling, and still came across the same issue. I don’t have a real device, so I took one of the suggestions and ran the device as an ATOM rather than INTEL and – tada – I was actually able to run my blank app twice in a row.

It is interesting to try out these other developer environments I either haven’t used or haven’t used in many years and see what they are like and experience their pain points as a “newbie”. It definitely makes me appreciate the tooling I have in Visual Studio.

 
