Episode 4: The future of business with AI with Bertie Vidgen, Rewire

In our fourth episode of Survey Booker Sessions we speak with Bertie Vidgen from Rewire, a company building socially responsible AI for online safety.

Artificial Intelligence (AI) is a hot topic and being used in a growing number of ways.

We discuss:
✅ How Rewire is using AI to tackle online safety
✅ The move towards moderating and deriving information from customer feedback
✅ Will AI replace or support people and customer service teams?
✅ Understanding your value proposition and when to pivot
✅ Explaining new products without getting stuck in jargon
✅ Understanding when it's right to use AI or automation
✅ How to attract talent when competition for candidates is high

View on Zencastr

Transcript

SPEAKERS

Matt Nally, Bertie Vidgen

Matt Nally  00:33

On today’s episode, we have Bertie Vidgen, who is the co-founder and CEO at Rewire. So, thanks for coming on today, buddy.

Bertie Vidgen  00:54

Yeah, thanks very much for having me. Really glad to be here.

Matt Nally  00:57

So, what’s your role in Rewire? What does Rewire do? 

Bertie Vidgen  01:00

I’m the CEO and co-founder of Rewire, a company that I founded with our CTO, Paul Röttger. We met at the University of Oxford and the Alan Turing Institute, doing research into the use of AI to stop hate speech and other forms of content that are illegal, harmful, or that violate a platform’s terms and conditions. So that’s what Rewire does: we build AI systems that help make the process of content moderation more efficient and effective. Partly that’s about saving money, because at the moment a lot of platforms are using humans to do this work, which is not a great way of solving the problem. Humans are very slow, they’re very expensive, and they don’t scale. But they also suffer really serious harm. So we don’t just see this as something driven by money; it’s also driven by the social impact and the challenges that people face. We also truly believe that AI can open new possibilities for how people are kept safe online. There are things you can do with AI that you cannot do if you’re just manually looking for content that might be violating your terms and conditions. So it’s quite an exciting space to work in. And of course, we see a lot of this in the news at the moment with everything that’s happened at Twitter and Facebook; it’s a very hot area to be in.

Matt Nally  02:15

That’s a massive area. And I think some of the bits we’ll come on to later are the areas you’re looking at with AI around customer service. But just to take a step back a bit: what is AI, or artificial intelligence? And how is it different from, let’s say, machine learning or just general computer software?

Bertie Vidgen  02:34

This is one of those really never-ending debates. My background is as a researcher, and people endlessly debate whether something is AI, whether it’s machine learning, or whether it’s just statistics that looks a little bit fancy. Broadly, we try to think of it as technology that automates processes. Now, the extent to which that is truly mimicking, reflecting, or replacing human intelligence is a very open-ended question. Obviously, as we’ve all seen recently, OpenAI has ChatGPT, which has been absolutely amazing and has really started to meet human levels of writing, which we’ve never had before. We’ve never had generative AI that could do that, so it’s been a huge breakthrough. But does that mean that AI is actually thinking for itself? No, it’s still not thinking for itself; it’s still a deterministic computer programme. So, we’re using AI in that sense: anything which can automate the process of content moderation.

Matt Nally  03:31

That’s a theme with AI: are we at the stage of doom and gloom, where AI will take over jobs, or is it actually still there just to support, just a computer programme doing things for you? I know you deal with AI mainly in communities, more specifically within the confines of social media. But are there other types of communities where AI fits in and works as a solution you can help with?

Bertie Vidgen  04:03

The way that we really see the market is that anywhere you have people sharing content and interacting with each other, you always have a risk of harm being created. Sometimes that risk is incredibly low. There are platforms we’ve seen which are sort of internal message boards, where people are sending messages to each other, like in a Slack workspace; the risk of serious harm is low. It’s never zero, but it really does start to become quite small. But there are other spaces where it’s incredibly high: say you have a chat service aimed at children, or a live audio feed while people are gaming, where a lot of the users are children, but not exclusively. That really can create quite a serious risk of harm. And in all those spaces, I think AI can come in and help, because it’s just a question of scale. You have so much content being created in real time that there is no feasible way to go through and check it, and the alternatives that people have proposed are just absolutely infeasible. Say you want a moderator to check every video on YouTube before it gets posted: that’s not going to work; people are going to stop using YouTube straightaway. So something we think a lot about is proportionality. How do you make sure that, whilst you’re trying to protect from harm and build AI that can improve that process, you don’t go too far the other way? You don’t want to start ruining user experiences, shutting down free speech, or adding friction where you just don’t need to.

Matt Nally  05:29

So, one of the things you mentioned was AI moderating chat. So, is there something coming where AI will help with moderating customer service? So, for example, before you open an email, you’ll have an idea of the sentiment of what’s in that email and what type of content is there. So, is it a positive email that you don’t need to prepare as much for? Is it a highly negative email that you might want to mentally prepare for, rather than sort of opening it and just being a bit taken aback and shocked? Is that where we’re going in terms of moderation there? 

Bertie Vidgen  06:01

It’s sort of fascinating, because we did some work recently with Deutsche Bahn, the national railway operator in Germany. They have this problem that they get a lot of real-time customer feedback, and that’s really helpful. They have a sort of icon you can scan on your phone when you’re on the train to give them feedback straightaway; they have forms; they have Twitter and other social media profiles. So it’s really easy to get in touch with them and say, “Look, this is what I think about your services.” The reality is that most people spontaneously give customer feedback when they’re either incredibly happy because something has gone absolutely amazingly well, which is quite unlikely with trains, because a train doing an amazing job basically means you don’t really think about it. It’s very rare that someone says, “You know, you were on time; that was fantastic.” That doesn’t happen. Or they give feedback when they’re really annoyed, when they’re like, “Guess what? Your train was late, and I missed my next connection,” or something really important, or just “the service was horrible.” So, I can’t put a number on it, but they get quite a lot of toxic, angry people raging at them. And the challenge is: how do you separate that out so that your staff don’t have to look at this stuff when they’re not really ready for it? Quickly looking at the feedback, you see all this horrible, sometimes incredibly explicit, sexually explicit, almost violent content; it’s not really very nice to look at. But how do you also make sure that you don’t just throw all that feedback away because it’s so horrible? There’s lots of useful information in there, lots of helpful signals about more substantive criticism, and useful feedback on where you could actually improve the service in a meaningful way. So we want to help them capture that.
So we’ve been doing some work with them to help filter out that sort of harmful content, that toxic feedback, and to help them make sense of it so that they can actually derive some value from the more useful critical feedback.
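The routing workflow Bertie describes, filtering toxic messages away from staff without discarding them, can be sketched in a few lines. This is purely an illustrative sketch, not Rewire's system: the keyword list, threshold, and function names are invented stand-ins for a trained toxicity classifier.

```python
# Toy sketch: route customer feedback by a toxicity score so staff see
# substantive criticism directly, while toxic messages are held back for
# filtered review rather than thrown away. A real system would call a
# trained classifier; this keyword score is a hypothetical stand-in.
TOXIC_TERMS = {"idiot", "hate", "useless"}  # illustrative only

def toxicity_score(message: str) -> float:
    """Fraction of words that match the (stand-in) toxic term list."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TOXIC_TERMS)
    return hits / len(words)

def route_feedback(messages, threshold=0.2):
    """Split feedback into messages shown to staff directly vs. ones
    held for filtered review. Nothing is discarded."""
    direct, held = [], []
    for m in messages:
        (held if toxicity_score(m) >= threshold else direct).append(m)
    return direct, held

direct, held = route_feedback([
    "The 8:15 was late and I missed my connection.",
    "You useless idiots, I hate this service!",
])
```

Here the first message, a substantive complaint, goes straight to staff, while the second is held back, yet both remain available for analysis.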

Matt Nally  07:55

As far as the longer term, is it something you see being used by SMEs? Or will it be more exclusively used by larger firms that just have massive volumes of inquiries?

Bertie Vidgen  08:05

This is a big question that we ask ourselves all the time: where is the market for this? Is it with the bigger firms? Because until you get to a scale where it’s too hard for one person to go through and check everything, however unenjoyable that work is, most firms just won’t put the money into AI; they’ll put the money into hiring a member of staff, which they can sort of handle. At some point, though, it becomes too much; it just becomes way too hard. So, the big social media platforms are always going to have to use some AI. And even if you’re a moderate-sized company with a lot of customer feedback and a lot of inbound messages on your socials, you’re probably going to need to use some AI to process them. I’m not sure where the bottom of that market sits, or whether smaller organizations really want to use it, because there’s also a reality that AI is relatively expensive. Building bad AI, getting something off the shelf that’s complete rubbish, is cheap and easy, and there are 1,000 providers out there building pretty shoddy AI products. If you want something that’s good, that works as you want it to, that can be live-updated, that understands nuance and context and actually starts to match that human level of reasoning, you need to put some more serious money into it. And even though OpenAI’s ChatGPT is amazing, by itself you can’t just point it at your customer feedback and say, “Right, tell me about this customer feedback and explain it all to me.” It’s still not quite there; you still need someone to come in who actually knows about the setting and knows about the problem to help.

Matt Nally  09:36

Yeah, I am interested in how the market evolves and where it eventually bottoms out. One of the questions I had that I think relates to surveying is that surveyors have to explain different services to different people at different times, and people have different motives. So, for example, you might have a first-time mover, an upsizer, or a downsizer. But with everyone, there is jargon within the process, and there’s the challenge of keeping things simple and straightforward. When you’re trying to market something like Rewire, an AI product, it’s very new; there are lots of new industry terms and all that kind of stuff. Is there a challenge in getting your message across to potential customers and cutting through that sort of new language?

Bertie Vidgen  10:25

It’s really challenging, and I have to say we always have that problem of how to simplify things which for us are kind of bread and butter. We’ll throw around terms like the precision or recall of a model; we’ll talk about gold-standard labelling, the annotation process, and inter-annotator agreement. To our team, this is second nature, because we’ve worked on it for a few years and do it every day. But a lot of the customers we deal with tend to be non-technical. That’s been a learning point for us: a lot of our customers are not deep technical experts and have, at best, a working familiarity with some of these things. You have to take them through what it means. And I think it’s always the “so what”. Take precision, which is a metric for assessing an AI model: of all the things the AI flagged as being hateful or toxic, it tells you how many of them actually are. It’s a subtly different metric to accuracy or recall, so you always have to be a little bit careful with how people interpret it. But the “so what” is what matters: if our precision is low, that means lots of the things we’re telling you are hateful or toxic actually aren’t, so you can’t really trust the model or the model’s results. And I think that framing has always been really helpful to us. Actually, it often helps us to not explain things we don’t need to. We could go into all of it, like “your F1 score is this”, but the customer is not going to care about it the way we do, because it matters to us for deep technical reasons. They don’t need to know this stuff; they need to know the things they need to, which is obviously a truism, but it’s something we do try to think about quite a lot in the more technical areas.
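The precision and recall metrics Bertie walks through are easy to make concrete. A minimal sketch, where the message IDs are invented purely for illustration:

```python
def precision(flagged, truly_harmful):
    """Of everything the model flagged, what fraction really was harmful?
    Low precision means you can't trust the model's flags."""
    flagged = set(flagged)
    if not flagged:
        return 0.0
    return len(flagged & set(truly_harmful)) / len(flagged)

def recall(flagged, truly_harmful):
    """Of everything truly harmful, what fraction did the model catch?
    Low recall means harmful content slips through."""
    truly_harmful = set(truly_harmful)
    if not truly_harmful:
        return 0.0
    return len(set(flagged) & truly_harmful) / len(truly_harmful)

# Hypothetical message IDs: the model flags four messages; three of the
# flags are correct, and one harmful message (105) is missed.
flagged = {101, 102, 103, 104}
harmful = {101, 102, 103, 105}

p = precision(flagged, harmful)   # 3 of 4 flags correct -> 0.75
r = recall(flagged, harmful)      # 3 of 4 harmful caught -> 0.75
f1 = 2 * p * r / (p + r)          # harmonic mean of the two
```

The F1 score Bertie mentions is just the harmonic mean of precision and recall, a single number that penalizes a model that is strong on one and weak on the other.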

Matt Nally  12:12

So, do you try to avoid jargon words entirely? Is it best to just use plain English in all respects? Or is there a benefit to using technical jargon sometimes to show knowledge, but then going in and explaining it in simpler terms? And when you’re focusing on things in plain English, are you looking at the “what” and the “why” we were talking about earlier?

Bertie Vidgen  12:37

It’s about trying to find a way of communicating it without talking down to someone: saying, “Maybe you guys already know this; maybe I’m telling you things you already know. But still, I just want to be really clear when I say this, this is the meaning of it.” I think that’s been quite helpful. But obviously, if you do that too much, it starts to feel like a lecture or a seminar or something, and that’s not very good on a client call. But sometimes we do it, just to make sure everyone’s on the same page. We try to say, “Look, here’s the result. Here’s the thing you need to think about.” Now, if you want to go five levels deeper and really push us on every single assumption and all the details, we have that; we have actually done that thinking. We have it in the appendix to the deck, and we have it in the open-source, open-access research papers that we publish. It’s all in the papers; it’s all there; we’ll even try to share some of the data. So if you really wanted to recreate our work, you could, and then what’s the point of hiring us in the first place, maybe. But sometimes people do; they just want to know they can recreate it. We say, “Yeah, look, absolutely, it’s all there.” Just saying that you have that, and being willing to be transparent and open, instills so much confidence that it’s very rare people really want to push us all the way down to the bottom. Though I have to say we actually like it when they do, because we’re a bit nerdy, and that’s kind of why we got into the game: we enjoy those difficult, challenging problems.

Matt Nally  14:06

So that ties into another question I had. Every business has a core service and core strengths and offerings, and there are times when you might try a new service or a different angle on a particular service. You made a very clever move, going from being primarily online- and social-harm-based in terms of the AI products to repurposing them, in some respects, and focusing on how you can help with the customer service angle. How do you go about stopping and thinking about when it’s right to pivot, move on, and try something else, and what, I suppose, commercially makes more sense? What’s your process there?

Bertie Vidgen  14:49

This is something we think about a lot, I’ve got to say, and I definitely do not have the answer to it. But it’s a very live conversation. It’s basically corporate strategy, and because we’re such a small organization, that feeds very closely into product strategy as well. The thinking is: what is the real pain point that we can solve? Initially we thought, people are doing this work as humans, so we’ll just build AI to replace the humans. That’s a classic kind of digital disruption: you have this slow, boring, manual process; let’s throw in some technology and see if we can fix it. We then realized that the way the market works, and the way people’s workflows are organized, meant that fitting in that AI wasn’t actually possible for most companies out there. Most organizations don’t have the scale, don’t have the workflow, don’t have the budget, and really don’t have the maturity in thinking about their trust and safety to start just chucking in AI. In many cases, what they really want is an orchestration tool, a tool that will help them manage that workflow, which we weren’t building. So then we thought, “Who does need AI? Well, the big platforms do.” But actually, they build a lot of this internally, and they have fantastic researchers. We know a lot of those guys; we have done work with Facebook and Google and some projects with them. But we know that’s not a huge long-term opportunity, because they’ve got amazing people; they don’t need to come to us to do it for them. So we started to think much more carefully about who is actually suffering from this, for whom is it a real pain point, who has money to spend, and where can we build something they can’t easily replicate because it would cost them too much to build it themselves. And that took us down to customer service; it took us to that space where every business wants to understand its customers better.
Every business has this problem of receiving toxic, undesirable feedback. If we can help them filter it and make sense of it, we have a big marketplace that we can chase after. So, we still see ourselves as an online trust and safety company; we’re just being a bit more flexible. We won’t know we’ve got it right until we have product-market fit: a lot of companies that really want to buy this product and that, if they didn’t have it, would be absolutely upset. Until then, we just need to keep pivoting and exploring things, being very honest with ourselves and very critical about where we actually add value. Perhaps, looking back a year, we didn’t do that enough. Now we really are.

Matt Nally  17:15

That’s a good point. What’s enough, and what’s too much? Do you find there’s a sensitive balance in how often you review things? Because you don’t want to overreact and stop doing something before it’s had a chance to bed in.

Bertie Vidgen  17:29

If you just pivot all the time, you’ve never really explored an option. You’ve got a couple of data points, just the first few people coming back and saying no, they’re not interested, and you’ve given up. Part of being a successful startup is being quite tenacious: saying, “Okay, but why? Is there a reason you can address?” Maybe it’s because you still haven’t quite found your right niche or segment, or you didn’t speak to the right person; there are so many factors that go into that. I always remember the Y Combinator story of Stripe, which did payments processing very early on. They were very effective at getting other Y Combinator startups to sign up because they didn’t just say, “Look, here’s a link; go sign up; give it a go.” They actually went over to people’s offices and said, “Right, you want to use it? I’m going to go get a laptop, and I will set you up and get you using this.” That slightly aggressive strategy has to be used by any startup, and if you pivot too often, you just can’t pursue it; you’re moving around too much. So, it’s very hard. And everything you do has an opportunity cost: by spending a couple of days exploring one option, that’s a couple of days you’ve lost, and you’ve got to think about that really carefully.

Matt Nally  18:37

The flip side is that if you don’t take that time out, you’ve got the opportunity cost of missing out on something and potentially not realizing there are avenues you could go down and make more money from. So, from a customer service perspective, and we’ve been touching on this a little bit throughout in terms of AI, do you think we’re going to get to a point soon where AI will replace the customer service function, because it’s good enough to analyze the sentiment of what’s coming in, and some of these tools can write responses to those problems? Or is it something that will be there to help analyze sentiment and provide overall analysis, while the support, and the value, will still come from the people within the process?

Bertie Vidgen  19:22

Who knows where things are going, because there are some amazingly powerful AI tools coming out, and I think some of the stuff we’ve built is already capable of making human-level decisions in some contexts; with more money and more funding, we can really start to push into that a lot more. But the reality is that most AI, and I think most responsible AI, will be about augmenting and supporting human activities, not replacing them, because it’s just too much to expect AI to do, and you should always have governance. If you’re automating every part of your process, where’s your human touch point? Where’s the human checking that this makes sense and that you’re not doing something catastrophic? We deal with a fairly high-profile, contentious issue in online safety: people expressing hateful language or abuse, or terrorism in some extreme cases, while we also make sure that we protect free speech and don’t invade people’s privacy through the AI that we’re training. So it’s very important that we make that judgement call correctly. Given the performance of AI now, I’d never want to see a system where you don’t have a human coming in and checking and saying, “Is this sensible? Are we making the right choices?” I think AI is best used to free up your human expertise to do the things humans can do which AI can’t. You can’t build a relationship with a customer or understand their problem in depth with AI; that’s just not what it’s for. You want people to do that. So those jobs don’t seem to be going away, and hopefully, with more AI, we can actually free people up to do more of that.

Matt Nally  20:55

So, in terms of supporting people, at the moment the use of it is reactive: it’s reading what’s coming in from customers and looking at sentiment and so on. But will it ever move to being a proactive system that analyses what you’re about to send out, suggesting where something might get misconstrued, for example, and therefore get an undesired response from the customer because they’ve taken something in a way that wasn’t meant?

Bertie Vidgen  21:24

People try to make these tools; it’s become a sort of popular area. It’s almost like giving nudges: someone who’s just about to hit send on a message or a tweet gets a prompt going, “Oh, are you sure you want to say that? Is that a nice thing to do?” There was an app from the BBC that did this, called Own It; you could download it on your phone. It was aimed at children, and it was a keyboard overlay that gave little nudges and said, “Well, do you really want to say that? Is that a nice thing to say?” The problem is that it makes a lot of mistakes, so it’s not always going to get it right, and that can annoy people. And do we want those nudges? Are nudges a good way of solving this problem, and an appropriate one? Is it not a bit of a nanny state in certain cases, especially for adults? But I think Grammarly-style tools for hate speech will be coming in. Of course, the one big challenge is that the people who tend to be most concerned about hate speech, who really want to solve that problem and who would think a Grammarly for hate speech is a good idea, are often not the people who would be spreading hate speech in the first place. And the people who are actually spreading hate speech would never want to use that tool. So, getting people to use this tooling can be a bit of a challenge.

Matt Nally  22:41

So, what, in terms of AI, is available to SMEs right now? There are things like ChatGPT that can help automate blog post creation and the like, and I’ve mentioned those previously on other podcasts. But what tools are there at the moment that companies can use to automate processes and speed things up, while keeping that personal touch?

Bertie Vidgen  23:08

The best thing to do is to be problem-driven, which means asking, “What’s the problem I’m trying to solve? What’s the constraint or challenge I face? Where am I being overrun with too much, say, inbound content?” Be really clear about what you’re trying to do, because I’ve certainly seen a lot of organizations just say, “How can we embed AI in our process?” That’s not a great way of thinking about it; it’s very solution-driven, a hammer looking around for nails. It’s much better to start with the problem, be very clear about that, and then see if AI is the right answer. In many cases, it’s just not worthwhile unless you’ve reached a certain scale or size; once you reach that scale, it becomes really important, and AI becomes an amazing cost-saving tool. The next thing to think about is what skills you have, and it’s good to be very open with yourself. If you’re not an AI expert, if you’re not that interested in it, can you outsource it? Can you bring someone in who could do it for you? There are plenty of companies that do outsourced AI development. Or can you use no-code solutions? This has been quite an exciting area within computer science for the last couple of years: recognizing that having to write code is a huge barrier to most people using any kind of technical system, whether that’s advanced AI generating human-like text or just a really simple predictive model that tells you which customers you should prioritize this month. You could use a very simple AI model to help tell you who you should be reaching out to, and any of those implementations you can now do without any coding experience. There are lots of cool tools out there that just completely remove that barrier. So I would definitely be looking at that, because beyond the true high end of AI, the amazing, fancy, latest state-of-the-art stuff, you can get very good stuff that is very close to that but has none of the technical barriers. You won’t have the absolute best AI in the world, but you might be 95% of the way there, and you only had to spend a little bit of money and didn’t have to code anything, which is a pretty good trade-off.
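The “really simple predictive model” for prioritizing customers that Bertie mentions can be surprisingly small. The sketch below is hypothetical: the features, weights, and customer records are all invented for illustration, and in practice a no-code tool would learn the weights from your data rather than having them hand-set.

```python
# Hypothetical sketch: score customers for outreach priority with a
# hand-set linear model. A real (or no-code) tool would fit these
# weights from historical data instead of hard-coding them.
WEIGHTS = {
    "days_since_contact": 0.02,   # longer silence -> higher priority
    "open_tickets": 0.5,          # unresolved issues -> higher priority
    "annual_value": 0.001,        # bigger accounts -> higher priority
}

def priority(customer: dict) -> float:
    """Weighted sum of the customer's features; missing features count as 0."""
    return sum(WEIGHTS[k] * customer.get(k, 0) for k in WEIGHTS)

customers = [
    {"name": "Acme", "days_since_contact": 40, "open_tickets": 2, "annual_value": 1200},
    {"name": "Bloom", "days_since_contact": 5, "open_tickets": 0, "annual_value": 300},
]
ranked = sorted(customers, key=priority, reverse=True)  # who to call first
```

Even a toy like this captures the design choice Bertie is pointing at: the value is in deciding which signals matter for your problem, not in the sophistication of the model.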

Matt Nally  25:13

So, moving on from the AI perspective to things more generally, as a CEO and co-founder: you’ve found your market fit and you’re starting to grow. How do you really start to scale whilst maintaining the quality of the product, the customer service, and so on? How did you find the right people and bring them in? And what are the keys to success on that side of things?

Bertie Vidgen  25:38

The first question goes back to what we talked about earlier: product and corporate strategy. Just be really honest: what is your strategy? What is your hypothesis for the market? The term “hypothesis” can be overused a little bit, but it is really helpful, because then you start to think about what evidence you need to support it. Our hypothesis was that people wanted to automate their process of content moderation by using AI to replace humans, but we discovered that wasn’t quite right, and as we dug into it, it became a little bit more complicated. So, again, thinking about when you should and shouldn’t pivot: if you have a very clear hypothesis that this segment really wants my product, decide in advance what evidence you need before you start. Maybe the goal is to speak to 500 potential customers, and if none of them expresses any interest, you move on to the next thing, because there’s no demand. That’s been a huge thing for us: understanding the value proposition and testing hypotheses in a very structured way. The second thing is team; you can’t beat a good team. It is just the most important thing in the world. Sometimes great people come in and you don’t quite know what the role would be for them, but you know they’re great, and so you find a way of bringing them in. Other times you need somebody technical, because it’s a technical role, and they do have to understand the technical work, but that doesn’t mean they’re a great long-term team member if they don’t totally get your ethos and what you’re trying to do. So, we do sometimes bring people in on short-term contracts because they have the right technical skill set but aren’t necessarily totally on the team’s vibe or fit, and just being very honest about that is important. And it’s fine.
There’s nothing wrong with saying to someone, “Look, you’re great; we really liked working with you. However, we don’t see you as our number-three employee,” because that’s a super important position for us. And then finally, it’s just flexibility. You’ve got to be flexible. We were a bit too fixed when we got started; we’d been in the space for six or seven years, working on online safety as researchers and advisors, and maybe we weren’t responding quickly enough to the market signals in the feedback. Now we’re starting to do that, and it’s been incredibly helpful. So, that’s been really key for us. But we’re still trying to work it out; we’ve never totally nailed it.

Matt Nally  27:54

That side of flexibility is interesting, because you have to be able to adapt to market conditions and to the feedback you’re hearing in terms of what people want from the solution. But equally, you have to build enough of your core product and really solidify it before you can start getting distracted by other things, because otherwise the core offering just isn’t there. Another aspect I wanted to talk to you about is recruitment, because one of the difficulties in surveying has been a shortage of surveyors. There are lots of surveying firms that have been very busy and need to attract surveyors into the business; equally, there aren’t that many surveyors to recruit. I imagine in AI it’s the same thing: there’s a skills shortage, where the people who could go into AI could also go into web development or cyber security. What are the keys you’ve found to overcoming that shortage and attracting talent into the business?

Bertie Vidgen  28:55

The UK, nationally, has a massive shortage of AI and data science skill sets, and we know the UK is not training enough people to meet current demand, never mind future demand. So that means people can charge a real premium when they come out of university; even as undergrads, they can command really serious salaries, especially if they’re London-based. So it’s hard. I think we’ve been really fortunate that we have a great network of PhDs and researchers, as well as people from Oxford and Cambridge, with whom we’ve worked for a long time and who are happy to come join us. Actually, Queen Mary has probably been the university that, for some reason, most people have come through. But it is difficult. I’d say that once you find someone who’s really good, it’s okay to be open with them, especially as a startup or a small business: just say, “Look, we don’t have a tonne of money to pay you, but there will be other benefits.” Equity is obviously a great way to incentivize people. And for us, we have a clear social mission, so that does appeal to people; they understand that we are trying to have a positive impact. We are a for-profit organization, but there’s still a very clear social mission, and the work is really exciting: we get to work on cutting-edge problems with some great clients, which, especially for junior people trying to develop skills, makes a big difference. Not everyone is just trying to maximize money; they value these other aspects too. But it’s hard, and it also just takes up time. It’s a huge time drain trying to find people.

Matt Nally  30:19

It's funny you say that. I was reading an article the other day on LinkedIn that basically said employee engagement and longevity with a company go beyond salary, and that a sense of purpose is much more about value alignment, feeling rewarded by a team, training, and things like that. So I think there's an opportunity there for firms that know their value proposition: why they exist, what they're offering to customers, and why. If you know that, then you can probably attract people without having to worry about winning purely on budget, and engage people that way.

Bertie Vidgen  31:01

Money is important. You've got to be realistic; people do want to earn a decent wage, and they're never going to take a huge cut to come to you. But yeah, I think we've been kind of fortunate; we've definitely had some great people come into the company. It's really fantastic when you find people who just totally get it, who get the mission and get our style of working. It's like having someone who can take work away from you and come up with things you would never think of, and that's when you hit a really sweet spot. But it's definitely a big concern. I'm not quite sure what the right term for it is, but it's definitely something we think about a lot.

Matt Nally  31:39

Thanks for coming on today, Bertie. And I might just put you on the spot before we go and ask, what are your top three tips for success and growth?

Bertie Vidgen  31:49

That's put me on the spot. Number one: think about money early on. Are you raising money, taking on debt, or bootstrapping? Really get into that straight away, because the longer you wait to raise, the harder it gets. You want to have that network rolling; you want to have those connections early on. Investors give amazing advice, and the best advice we've had has come from investors asking us tough questions and making us rethink what we were doing. So that's a big one. Number two is to be relentlessly focused on your customer. There's that saying about 1, 10, 100: get your one customer, the one who would be really upset if you shut down the next day because they absolutely love what you're doing and you've become essential to them, then get your 10, then get your 100. Each of those steps is roughly as hard as the others, so getting that first customer is not easy. That's the next thing. And then number three, on corporate strategy and product strategy: as I've been saying the whole way through, be very honest with yourself. Then be honest with your investors and your stakeholders, and develop a nuanced, sophisticated position on the market that explains why you're the right people and why what you're building, even if it's not the fanciest thing in the world, is the right thing to be building and is what people will actually pay for.

Matt Nally  33:13

Awesome. I think they’re good points. I’d agree with those. Thanks again for coming on today. And I look forward to catching up soon.

Bertie Vidgen  33:20

Thank you for having me. It’s been a real pleasure.
