Hey Danny, How Do I Win a Hackathon?

Billal Patel
Published in Dunelm Technology
7 min read · Jun 22, 2022


Written by Emile Brand, Billal Patel, Rickpickles and David Weatherall

Image: POAP for Hackathon 2022

Imagine being able to browse a retail website purely through voice control. Well, that is what our team attempted in the latest Hackathon at Dunelm. It was risky and challenging, but well worth it because… spoiler alert: we won the whole thing!

This was the first hackathon since the hybrid working policy came in and, as a result, the first-ever remote one. That didn’t stop anyone: close to 100 participants were randomly split across eight teams.

With about a day and a half to create and present our idea, our team consisted of five developers, two QAs, one Delivery Lead and one UX designer. Funnily enough, many of us had never even met before and, to top it all off, ours was the largest of all the teams.

Approach

Before being crowned Hackathon 2022 champions, we had a lot of steps to go through. The first was getting together on our initial (rather awkward) Teams call, during which we nominated Emile as the leader, mostly because he asked, “who should lead this then?” He put up a little resistance, but it was clear from early on that he would be a great choice.

Then came the small issue of selecting a hackathon idea. We had a wide range of suggestions with many of them linked to accessibility so we held a few rounds of voting and eventually settled on a voice-controlled user journey. We imagined it to be a unique customer experience that would only require speech to operate the Dunelm website.

Then it was time to select a name as “Team 5” would no longer cut it. After plenty of funny (and some not so funny) suggestions, Lisa proposed the name ‘Successability’ and although it sounds a lot like one of those corny names found in The Apprentice, we couldn’t help but love it.

It was finally time to determine how we would tackle this new idea. We knew that we couldn’t revamp the entire website for this new voice-controlled journey in just one day. So we agreed on an achievable, yet worthwhile MVP for our new website experience.

This consisted of a journey that would allow users to search for a product and add it to their basket. Our UX designer, Drew Tavernier, explained how the current website is optimised for clicking, scrolling and swiping, so we would need a new, clean interface exclusively for the voice-controlled journey. In record time, he produced these beauties:

Image: Initial UI/UX Designs

Challenges

To tackle the task, we agreed to split into a front-end team and a back-end team. We had a variety of skills within our team so rather than a straight split of engineers specialising in these areas, it was left up to each individual to join whichever sub-team they preferred. This not only made sure everyone had freedom but it also meant that we could switch between teams as we pleased which enhanced the communication throughout.

Most people see a challenge and will do anything to avoid it, but we love a challenge here at Dunelm. Although many of us were unacquainted with each other, this was one of the best team get-togethers many of us have ever been a part of. Some might say that having the largest group helped, but that had its problems too, as we had the greatest variety of people.

This was amplified by the fact that a few of the engineers had never touched the frontend codebase. Also, some of the team had only started a few weeks earlier and some had never written code before. Yet, we took that in our stride, and we banded together to ensure we all got something out of the experience. We did this by breaking down the sections we worked on and getting everyone on the same page. So we started with the mobile site as it was most fitting for accessibility-related improvements.

Another issue we experienced was setting up the codebase on different team members’ computers, as there was a mix of PCs and MacBooks (both M1 and older Intel models). We even had an engineer running it in Edge (yes, Edge on a Mac!).

A good example of the team coming together was when David’s computer didn’t want to render the site anymore and he couldn’t test the voice control. He then pushed the code up for someone else to test and give feedback so that we could make the necessary improvements in an agile way.

A different challenge we faced was “what can we learn from other natural language services out there?” We recognised that Amazon and Google both have a natural prompt which begins with “hey” or “hi”. So we followed the same approach and used the prompt “hey Danny”, which not only aligns with our brand name but also sounds somewhat friendly and inviting.

Tech

One of the main requirements for a hackathon is to use the most fun and cutting-edge tech possible. Our initial plan involved using AWS Polly and some natural language processing (NLP) to facilitate a two-way verbal conversation with the ‘AI’. The result was a bit more modest: our high-tech artificial intelligence solution turned into a bunch of “if statements”.

As this was going to be a web-only solution, the best (easiest and quickest) option was to explore what was already baked into the browser. Googling “Speech To Text JavaScript” and clicking the first result was pretty much the R&D phase. This led us to the Web Speech API, or more specifically the Speech Recognition API, which would turn out to be the core tech behind Danny. Basic browser support was deemed “good enough” for a one-day hackathon — does anyone even use Firefox?
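As a rough illustration (our own sketch, not the hackathon code itself), wiring up the browser’s SpeechRecognition looks something like this. The webkit-prefixed fallback and the `en-GB` locale are our assumptions:

```javascript
// Chrome ships the API under a webkit prefix; Firefox doesn't ship it at all.
const SpeechRecognition =
  globalThis.SpeechRecognition || globalThis.webkitSpeechRecognition;

function startListening(onTranscript) {
  if (!SpeechRecognition) {
    // Unsupported browser (or a non-browser environment): nothing to start.
    return null;
  }
  const recognition = new SpeechRecognition();
  recognition.continuous = true;      // keep listening between phrases
  recognition.interimResults = false; // only deliver finished phrases
  recognition.lang = 'en-GB';
  recognition.onresult = (event) => {
    // Hand the latest finished phrase to the caller.
    const last = event.results[event.results.length - 1];
    onTranscript(last[0].transcript.trim());
  };
  recognition.start();
  return recognition;
}
```

In practice a library such as react-speech-recognition wraps all of this up, which is the route the commands below take.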

Once we hooked up the basics of the library, we were able to program Danny by passing in an array of commands and their callbacks when initialising it, like so:

```javascript
const commands = [
  {
    command: 'Hey Danny, show me *',
    callback: (searchResults) => setSearchText(searchResults),
  },
  {
    command: 'show me *',
    callback: (searchResults) => setSearchText(searchResults),
  },
];
```

This gave us plenty of out-of-the-box functionality, so when the user says “Hey Danny, show me lamps”, we could easily grab the word “lamps” and add it to the React state. We could then feed this into the existing search APIs on the dunelm.com website to return a list of products matching the search term.
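Under the hood, that `*` wildcard capture might look something like this (our own sketch, not the library’s actual implementation): the pattern is turned into a regular expression and the captured slot is handed back for the callback.

```javascript
// Match a transcript against a command pattern whose trailing '*'
// captures the spoken slot (e.g. the search term). Returns the captured
// text, or null if the transcript doesn't fit the pattern.
function matchCommand(pattern, transcript) {
  const escaped = pattern
    .replace(/[.+?^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*/g, '(.+)');               // '*' becomes a capture group
  const match = transcript.match(new RegExp(`^${escaped}$`, 'i'));
  return match ? match[1].trim() : null;
}

const term = matchCommand('Hey Danny, show me *', 'hey danny, show me lamps');
// term is 'lamps', ready to feed into the search API
```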

Our sophisticated version of ‘training’ the AI model was to manually add as many ways of saying the same thing as we could think of, copying and pasting the code into each callback (it is a hackathon, after all!).
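With a few more minutes to spare, the copy-and-paste could be replaced by a tiny helper that registers several phrasings against one callback. This is a sketch of our own, not code from the hackathon, and `setSearchText` stands in for the React state setter:

```javascript
// Hypothetical stand-in for the React state setter used in the commands.
const setSearchText = (term) => console.log('searching for', term);

// Map several phrasings of the same intent onto a single callback,
// producing the { command, callback } objects the library expects.
function expandCommands(phrasings, callback) {
  return phrasings.map((command) => ({ command, callback }));
}

const searchCommands = expandCommands(
  ['Hey Danny, show me *', 'show me *', 'search for *'],
  (term) => setSearchText(term),
);
```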

One technical hurdle we came across was the different ways the English language can be interpreted when it comes to numbers. The number one would sometimes come through as ‘1’, ‘one’ or ‘won’, and it gets even worse when you get to the number two (‘to’, ‘two’, ‘too’, ‘2’)! The solution? We decided to only use the numbers 3 and 5 when demoing, as the API matched those more reliably.

Amazingly, the Speech API was able to understand all of our accents, from South African to Geordie. We also had a lot of fun trying to solve the problem of the API hearing words like “clothes” instead of “close”. Our solution? In places where we used the word “close”, we changed it to “exit”.

You may be thinking that this goes against the spirit of an accessibility-focused project. Fortunately, the JavaScript ecosystem helped us out again, with the ‘words-to-numbers’ library providing us with the functionality to get it right (most of the time) when a user would request a number. This meant that we only needed a small amount of manual mapping to get it right so we ended up with something like this:

```javascript
import wordsToNumbers from 'words-to-numbers';

{
  command: 'view item *',
  callback: (productIndex) => {
    let productNumber = Number(productIndex);
    if (isNaN(productNumber)) {
      // Convert spoken number words ("three") into digits (3)
      const wordNum = wordsToNumbers(productIndex);
      if (typeof wordNum === 'number') {
        productNumber = wordNum;
      } else if (productIndex === 'to' || productIndex === 'too') {
        // Homophones the library can't resolve for us
        productNumber = 2;
      } else if (productIndex === 'for') {
        productNumber = 4;
      }
    }
    const matchingProduct = products[productNumber - 1];
    if (matchingProduct) {
      setActiveProduct(matchingProduct);
    }
  },
},
```

Learnings & What’s Next?

Perhaps the biggest learning for us was how much we could achieve in such a short timeframe. Once we had our initial MVP, we had to ensure all work aligned with that scope; even so, halfway through we had to cut the MVP down to make sure we could deliver a working demo on time.

These are just a couple of the ambitious hopes we have for “Danny”:

  • To make this production-ready, it would be great to explore real NLP solutions that could handle far more of the messages a user might throw at it, storing the data and continuously training the model as more people use it. We would also bring in a text-to-speech API so the app could “talk” back to the user without the need for screen-reading technology.
  • It would be great to explore the possibility of adding this experience to other devices (desktop, tablets, wearable tech) and browsers.

When it came time to present to the judges and the rest of the company, we had a few test runs and, needless to say, with just minutes to go before our turn, we were experiencing some “complications”. However, with a bit of practice, we got it all working, and these are just two of the Slack messages worth sharing:

Image: Slack messages

Special shoutout to the rest of the Successability team: Antonio Guerro, Clare Coates, Drew Tavernier, Jay Gohel, Lauren Edge, Lisa McCormack and Tom Hames.
