Machine learning (new faces)

I’m working on a side project, machine learning with neural networks.

My goal with this is getting 3D information out of 2D video – but that’s a super difficult problem so I’ve been trying out different techniques on simpler problems such as generating faces. Yes, generating photos of people that never existed is a – relatively – simple problem.

Above is a selection of random faces slowly being changed into a different set of faces, as generated by my code and neural network.

Each face is generated from just 100 numbers. Altering one number will make a subtle or not so subtle change to the resulting image. Altering twenty will completely change the face (that’s how many are being altered above).
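As a sketch of how that works: blending between two 100-number latent vectors and decoding each intermediate vector is what produces a morph like the one above. The generator itself isn't shown here (`generator.predict` in the final comment is a placeholder for whatever model you have), so this only illustrates the latent-vector side:

```python
import numpy as np

LATENT_DIM = 100  # each face is described by 100 numbers

def interpolate_latents(z_start, z_end, steps):
    """Linearly blend one latent vector into another.

    Feeding each intermediate vector to the generator produces
    the slow morph between faces described above."""
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * z_start + a * z_end for a in alphas])

# Two random 100-number "face descriptions".
rng = np.random.default_rng(0)
z_a = rng.normal(size=LATENT_DIM)
z_b = z_a.copy()
# Altering twenty of the numbers completely changes the face.
changed = rng.choice(LATENT_DIM, size=20, replace=False)
z_b[changed] = rng.normal(size=20)

frames = interpolate_latents(z_a, z_b, steps=8)
# Each row of `frames` would then be decoded into an image,
# e.g. generator.predict(frames) with a hypothetical Keras model.
```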


BEGAN output

There are more impressive results out there, such as the BEGAN network (image above), which generates faces indistinguishable from real photos at a larger size than mine. There’s the neural network that uses a photo of a dress to generate a photo of a model wearing the dress. There’s a lot going on.


For anyone interested, here are some of the details about the coding, data and training I’ve used for the above.


I modified a version of this script to download enough images from Google image search, then went through and removed as many bad images as I could find – here, “bad images” means ones that weren’t faces, or faces that were blurry or obscured in some way. As there were around fifty thousand images this took a while and was imperfect, but the results aren’t bad considering.
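A sharpness check could automate part of that culling. The sketch below scores greyscale images by the variance of the Laplacian, a common blur heuristic; it isn't what I actually used, the threshold is a made-up value you'd tune on a handful of known-bad images, and image loading/greyscale conversion is left out:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the Laplacian: a common sharpness score.

    `gray` is a 2D array of pixel intensities. Sharp images have
    strong local intensity changes, so the Laplacian varies a lot;
    a low score suggests a blurry image."""
    g = gray.astype(np.float64)
    # 4-neighbour Laplacian via shifted arrays (interior pixels only).
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def looks_blurry(gray, threshold=100.0):
    # The threshold is dataset-dependent: tune it by scoring a few
    # images you've already judged sharp and blurry by eye.
    return laplacian_variance(gray) < threshold
```

This only pre-filters; obscured or non-face images would still need a manual pass or a face detector.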


I trained the NN on a laptop with an Nvidia 970m graphics card for several days to get to the point I could make the image at the top.

NN architecture

I used a WGAN, a GAN (Generative Adversarial Network) trained with the Wasserstein loss, which has been much more stable than my previous attempts with simpler techniques.


I wrote the program in Python using Keras with a TensorFlow backend.
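For anyone curious what "Wasserstein loss" looks like in practice, here's a minimal sketch of the critic loss and the original WGAN's weight clipping, written in plain NumPy for clarity. In Keras the same expression would use keras.backend.mean and be passed as the compiled critic's loss; the 0.01 clip value is the paper's default, not necessarily what my network used:

```python
import numpy as np

def wasserstein_loss(y_true, y_pred):
    """Wasserstein critic loss.

    With labels of -1 for real samples and +1 for generated ones,
    minimising this raises the critic's score on real images and
    lowers it on generated ones, approximating the Wasserstein
    distance between the two distributions."""
    return np.mean(y_true * y_pred)

def clip_weights(weights, c=0.01):
    """Original-WGAN weight clipping: applied to every critic weight
    array after each update to keep the critic roughly Lipschitz."""
    return [np.clip(w, -c, c) for w in weights]
```

The clipping step is what distinguishes the original WGAN from later gradient-penalty variants, and is a large part of why training is more stable than with a standard GAN loss.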

Further work

I’m currently working on a program to sort images into groups or categories by ‘similarity’ – here meaning that they share certain features. I decide which features by dragging example images into a category; the program then works out what makes them ‘similar’ and can filter the entire dataset based on that.
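A rough sketch of the filtering step under some assumptions: each image has already been reduced to a feature vector (by something like a pretrained CNN), and "similar to a category" means close to the average of the hand-sorted examples. Both the feature source and the threshold are placeholders, not my actual implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_by_category(features, category_examples, threshold=0.8):
    """Keep the images whose feature vectors resemble the hand-sorted examples.

    features: (N, D) array of per-image feature vectors.
    category_examples: (K, D) vectors of the images dragged into the category.
    Returns the indices of images similar enough to the category's centroid;
    the threshold would need tuning per dataset."""
    centroid = category_examples.mean(axis=0)
    sims = np.array([cosine_similarity(f, centroid) for f in features])
    return np.flatnonzero(sims >= threshold)
```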

Once I’ve got that working I should be able to improve the quality of my training data and therefore the results I get at the end.


There’s room for huge improvement just by throwing better hardware at the problem. The latest Nvidia cards would allow larger images of higher quality to be processed at a similar rate to what I’m getting now, due to having approximately four times the memory and three times the number of processing units.

Flickering feet!

For a very long time I’ve been having problems with the fading feet in Touch of Darkness flickering as they fade. I’ve only just worked out why, and it’s not obvious so I’m posting here in case it helps anyone else with the same problem.

The issue was simple but only became obvious when at one point every foot flickered in sync (I think this was due to changing the fade algorithm).

I was using Create Dynamic Material Instance with an existing Dynamic Material Instance as its input, when what I should have been passing in was the base Material.

Even though the node accepts a Dynamic Material Instance as an input, it appears to apply that exact same instance to the mesh – so every mesh ends up sharing one instance, and they all fade together.

Just feed the output of the Get Base Material node into Create Dynamic Material Instance instead.



As Touch of Darkness is taking somewhat longer than expected to complete I’m having to do one or two smaller projects to get something out. The joys of having to pay for your own time, you know.

As such I’m working on a simple puzzle game, Shuttorscape.


More coming soon!



Augmented Reality in Unreal Engine 4

A lot of people seem to think I make Unreal4AR. To be clear, I am just a user of Unreal4AR, I do not produce or provide it.

The site for that is

Here’s how to set up Unreal4AR from scratch (the documentation gets you to copy an existing small project in and I know some people like to start with a blank canvas).

It’s pretty much as the documentation says, except:

  1. Don’t copy the entire Content directory, just:
    • ARToolkit
    • Blueprints/
      • AR_ACtor_TOUCH.uasset
      • ARToolkitBase.uasset
      • ARToolkitBaseAdvanced.uasset
      • Markers.uasset
      • MarkersNFT.uasset
      • Mouse_GameMode.uasset
      • Mouse_PlayerController.uasset
    • Collections
    • Developers
    • Materials
    • Textures/
      • HUD_Custom.uasset
      • LoadingScreen.uasset
    • UI

In World Settings (Window->World Settings; it appears in the right-hand panel), set the Game Mode to Mouse_GameMode.

Drag an ARToolkitBaseAdvanced into the level. It should appear as a camera with a large square plane in front of it: the camera provides the virtual part of the AR view, and the plane uses the BaseFeedScreenNoShadow material to display the real-world/video part.

With the ARToolkitBaseAdvanced selected, in the Details panel change “Auto Activate for Player” from Disabled to “Player 0”.

The important parts of the Level Blueprint are “Transform Camera”, “Actor Visibility”, and “Offset for NFT Marker” (assuming you’re using NFT markers – I am).

Be aware that I’m not using non-NFT markers, so I’ve removed the relevant nodes for those.

I think that’s everything I did to get it working – let me know if I’ve missed anything. Helpful images below.

Transform Camera and Actor Visibility
Offset for NFT Marker
ARToolkitBaseAdvanced settings
The ARToolkitBaseAdvanced Settings panel on the right showing the Auto Player Activation set to Player 0.
The level screen, showing the World Settings panel on the right with the Mouse_GameMode, HUD_Custom and Mouse_PlayerController.



Now I’m past the worst of the coding (fingers crossed) I’m making steady progress on level building. After a handful of levels built comes the testing and feedback cycle. It’s good to be moving forward.


In Which I Become Overwhelmed

Edit: to make the role of working on low-hanging fruit clearer, and to mention the effect of isolation.

For tl;dr scroll down.

Over the past few weeks I’ve had a struggle with making any progress on ToD. This is about why and how that ended, in the hope of helping others avoid and/or get out of it when it happens to them.

To begin with I started losing momentum – a couple of large issues came up that were hard to fix (including a bug that wouldn’t occur if the debugger was attached). It all slowed down. The lack of progress became disheartening, which in turn made everything seem that much harder. I knew I had to get these things fixed or there would be no game. Somehow, where only a week or two earlier I’d thought I was fairly close to having most of the game’s main features complete, there were now countless major issues that were going to take an incalculable amount of time to fix before I could even begin to make real progress, rather than just making broken things work.

In the middle of this I was turned down for the EGX Leftfield collection, which was disappointing – and, being a bit down already, it was more of a knock than it should have been, especially as I was fully aware that competition for a place was fierce and ToD wasn’t even in alpha at that point, so it was hardly surprising.

Eventually I decided to temporarily ignore the big issues and move on to some of the ‘easy’ stuff so I could make progress – stuff that I would need to do further down the line but really wasn’t a priority. I spent a few days making some simple models, putting together the “picking things up” animation and the like – the low hanging fruit that’s easily dealt with. This allowed me to make obvious progress, which meant I felt like I was making progress, which improved my state of mind so I could think more clearly.

Two days later I came to realise that the ‘big’ problems of the past few weeks weren’t big, most of them were trivial and the largest was fairly simple – and I’d already mostly fixed it anyway.

I’m now back on track and progress is being made.

I hot desk in an office with a number of other developers – this allows me to get out of the house and see people a few days a week. It also lets me discuss ideas and get feedback. I can work on my own but after a lot of time in isolation it can become hard to keep going – socialising helps recharge me mentally and the moral support from others is invaluable. Part of the problem I had above was that I’d decided I was going to fix this major bug before I went back into the office. Obviously it took a long time and I deprived myself of my own support network at a time I needed it most, which made it even harder to get out of it.

In my head I’d felt that I couldn’t go in until I’d fixed it, I didn’t want to see the other developers in the office until I was making progress again. It was the same day I went into the office for the first time that I decided to do the easy work and started to actually make progress again.


If you start feeling overwhelmed, progress will slow and everything will seem worse, which can spiral out of control. Backing off and chilling can help, but it won’t make you feel like you’re getting anywhere, and the big problems will still be there when you come back. Instead, pick something easy to work on, make some quick wins, and get back to a good place where everything seems that much more manageable.

I appreciate this won’t work for everyone and all circumstances, but thought it was worth putting it out there in case it helps anyone.


(really tl;dr below)

A large, particularly difficult problem arose that set me back and couldn’t be fixed by anything other than calculated guesswork and the problem-solving equivalent of flailing around.

  • It got me into a place where somehow every single problem seemed like a major issue that I would be unable to fix.
  • This spiralled and became overwhelming, making everything seem even more intractable.
  • I was turned down for Leftfield, which was more disappointing than it should have been.
  • Eventually I side-stepped it, moved onto easy stuff, made progress and within 2 days had a more positive view of everything, then managed to fix the largest remaining issue in an hour.

really tl;dr

A couple of hard problems blew up out of proportion, I became overwhelmed, I temporarily moved on to easy stuff to make progress and suddenly everything was back into proportion and progress was made. Yay.

Debugging woes!

A quick update, I’m working towards getting the last major bugs sorted and then I can get to the real work of level design.

I spent a few days last week tracking down a crash bug that wouldn’t occur if the debugger was attached. It’s painful, but the only way to deal with those is to work systematically through everything that could be wrong and fix it (there was no way to get output around the bug, so even printing info to the screen didn’t help much). Bright side: I probably fixed a few things that would’ve blown up later anyway, so it wasn’t a complete waste of time.

The voice acting is mostly done now. I’ve been wanting to show that off in a short video clip, but for some reason nothing seems to be able to record the audio on my machine – I’m guessing something has exclusive use of the output so nothing else can grab it. It used to work, I don’t think I changed anything, and I can’t get it going after hours of trying. I’ll try again later.