Make or buy? We’d hoped for an off-the-shelf training solution, but the results were underwhelming. So we created our own. This is the story of that process, and of what we learned along the way.
What, exactly, do we do? That’s the first question a business needs to ask itself when working out its training requirements. Normally, you get a simple enough answer. But for idalab?
We are problem-solvers for hire, but the nature of those problems … varies quite a bit. When we’re helping our biopharma or medical device clients get more value for patients from data and AI, the crux can be anywhere. Is the problem hidden deep in the data? Perhaps it’s a legal issue relating to the data warehousing architecture? Or would a Bayesian model solve it? Just as frequently, it’s a skeptical stakeholder standing in the way of success.
It’s a wide mix of skills – spanning tech, AI methodology, biology and industry know-how – that we need to bring to the table. And then there’s the fuzzy-seeming, ever-elusive set of “consulting skills”. People will tell you these “come with experience” or that “you’ll learn by doing”. You can appreciate why plenty of companies give up on the idea of training these skills at all. But we didn’t want to accept defeat. Because, as a company, we knew we could only grow together.
In this article we want to share how we, as a tiny company, developed a portfolio of 21 custom training modules within a year – all without taking our eye off challenging client projects.
More than a methodology refresher
Hire clever maths, physics and computer science people – who perhaps even specialised in artificial intelligence (AI) or machine learning (ML) – and you’d think there’d be no need to train them in methodology. What could we add to those ML classes or Coursera courses taught by world-leading academics?
Quite a lot, as it turns out. We’ve found that machine learning classes – even the “practical” ones – tend to focus too narrowly on the ins and outs of certain algorithms and fail to take a broader, more application-oriented view. Take performance metrics, for example. Anyone who’s done an introductory ML course will have heard about accuracy and AUC, and probably precision/recall – maybe the F1-score too. No big deal. Yet the fact that they are all “easy to compute” belies the many subtleties of their application.
This is where it all started, with a humble module: “Performance Analysis for Classification Tasks” (M01). When should you use pAUC? What about micro- vs macro-averaging? When can you trust your precision? How can you properly evaluate systems with a reject option? Isotonic calibration, anyone? All these questions before we even begin to think about multi-label settings …
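To make one of those subtleties concrete, here is a minimal, hypothetical sketch of micro- vs macro-averaged precision in plain Python (the function names are ours for illustration, not part of any training module):

```python
def per_class_precision(y_true, y_pred, labels):
    """Precision for each class: TP / (TP + FP)."""
    prec = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
        prec[c] = tp / (tp + fp) if tp + fp else 0.0
    return prec

def micro_precision(y_true, y_pred):
    """Micro-averaging pools every individual decision; in a
    single-label multi-class setting this collapses to plain accuracy."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_precision(y_true, y_pred, labels):
    """Macro-averaging weights every class equally, so a rare class
    counts just as much as the dominant one."""
    prec = per_class_precision(y_true, y_pred, labels)
    return sum(prec.values()) / len(labels)

# A deliberately imbalanced toy example: 8 samples of class "a", 2 of "b",
# with one error on each class.
y_true = ["a"] * 8 + ["b"] * 2
y_pred = ["a"] * 7 + ["b", "a", "b"]

print(micro_precision(y_true, y_pred))              # 0.8    (dominated by "a")
print(macro_precision(y_true, y_pred, ["a", "b"]))  # 0.6875 (rare class drags it down)
```

Same predictions, two defensible scores – which is exactly the kind of gap a purely algorithm-focused course glosses over.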
We’ve found that a good format for methodology training is to set trainees a selection of almost essay-type questions, combined with practical tasks on a dataset. These follow an introductory lecture, held at our summer and winter retreats, which speeds up the self-study phase. To pass the module, each trainee presents their findings to a reviewer as a mini-lecture.
Having invested a lot of time into our methodology training programme, we were hoping for a swifter solution in other areas. Communication skills felt like a likely candidate for an off-the-shelf solution. We didn’t feel like we were asking for much – just some fairly standard guidance on how to write direct and easily digestible emails to busy clients, run thought-provoking workshops that produce useful feedback, and deliver engaging presentations.
Surely an area so many companies struggle with would be well served with adroit training solutions? Certainly there was no shortage of providers, but it was difficult to tell if they would truly deliver. In the end, there was nothing for it but to take the plunge and hire one. Then we tried another. And another.
But we were always disappointed. After each underwhelming session, the confusion and frustration felt by the team was palpable. Belittled by banalities, and confused by trainers who contradicted many of the best practices and habits they had picked up at the company, people lost enthusiasm for the very concept of training.
We came away from the experience with one overriding impression: that most providers merely target the box-ticking needs of corporate HR departments. As a small company that had already invested time and energy in mentoring and establishing best practices, there didn’t seem to be much they could help us with. (Although we’re still open to the possibility that more suitable providers may exist – and if that’s you, you know where we are!)
Start small, iterate a lot
Forced to contemplate developing a comprehensive training programme from scratch, we were initially a little downhearted. Where would we – a tiny company, with 80% of our working day eaten up by demanding client projects – find the time?
After some overthinking, we decided to just start as small as possible: put something together in less than a day, then improve it based on feedback. There would, we supposed, be some initial embarrassment (on the side of the trainer, mostly), but as long as everybody bought into the iterative approach, even that could be fun.
And what really surprised us was how quickly you build momentum and make progress. The satisfaction of shaping and building something unique spreads throughout the team. The training we developed to help people deal with difficult questions during presentations – a pain point identified by many members of staff – is a case in point.
To get the ball rolling, we selected a few slides from a recent project presentation that had sparked a few uncomfortable questions. Two of us (the designated trainers) then compiled a list of 10 or so critical ways these slides might be interrogated, plus strong answers to go with them. All in all, this took us less than three hours.
In the training session, one lucky person got the job of presenting the slides – and then being bombarded with these questions. It wasn’t quite as stressful as it sounds, because we made sure to keep the interrogation friendly! The whole group then discussed the presenter’s answers, coming up with alternative strategies and sharing relevant war stories.
We decided feedback should be collected anonymously, via a written survey. A few days after the event, the trainers hold a 45-minute “post-processing” session to go through the comments and decide then and there if and how the module should be refined.
That’s how “Handling Questions” (C303) has evolved into a comprehensive training module, complete with theory and best practices.
Given our size, the level of investment we put into this area was – and continues to be – pretty serious, with one person spending a third of her time on training. We have only come this far by making staff training and development a top priority.
The true value of training
So what have we learned on this training journey of ours? Most importantly, perhaps, that training has a much bigger impact than just improving people’s knowledge and skills.
Because all training takes place in-house – and nobody is an expert in everything – everyone gets to be both teacher and student. That means the tech-savvy teach graph databases today to colleagues who might teach them project design tomorrow. It all creates a positive, “safe” environment, in which being “bad” at something is totally fine. Far better for your culture to develop this organically than by defining values via buzzwords.
On top of this, we’ve surprised ourselves with how many things actually are teachable. You could say we learned by doing after all.