I’ve done a lot of hackathons, and a big part of the experience I love is being able to talk to great mentors and receive feedback from judges. In return, I’ve tried to give back over the years by judging and mentoring at hackathons – but that doesn’t scale and doesn’t deliver consistent impact (at some hacks many hackers seek my mentorship; at others, few).
A couple of years ago, I reached out to Brian – one of the organisers of MIT RealityHacks (a magical experience) – expressing interest in helping them run the hackathon. They had previously reached out about using a “team progress visualisation tool” I had built, for their previous iteration, but it was too late for me to get it ready in time. They did try a hardware version of that concept (which unfortunately was too much to manage at the hack).
I was quite keen on both supporting their hackathon and magnifying the impact through software. However, I was asked if I could help with graphic design – which, again, is not scalable. I wanted to build something that could fundamentally support a small aspect of running a hackathon.
In particular, I wanted to help the small hackathon ecosystem here in Australia. If we make it easier to run hackathons, would we get hackathons of better quality and in greater quantity?
And so,
the 1st version - a splash of cold water
Today, the Discord bot product I’ve been building got its first user, first customer, and first actual use case.
- a user used my Discord bot
- the customer installed my Discord bot on their server
- it was for a hackathon they were running across multiple countries both in-person and remotely
Unfortunately, the large majority of users did not use my product.
😮 Big yikes. They instead used the typical workflow, as if my product wasn’t there.
Fortunately, observing the usage (and the lack of it) gave me insights into which features to prioritise next – insights far more useful for making something people want than the many hypothetically useful features I had been generating in my mind. Those are now in the backlog.
Observing how users used (and more importantly, didn’t use) my product gave me more truthful insight than my product sense, intuition, or personal experience could self-generate in this case.
My product sense generates hypotheses, and I was essentially taking them as truth and building on them. That’s fine to an extent (the MVP) – and that extent is more minimal than I expected.
Don’t get me wrong – YC has drilled “launch early” into my brain so many times (especially through all their podcasts I’ve listened to) – but truly internalising it is a different story.
Despite the lackluster usage, the organisers saw its potential usefulness and wanted to continue using it in future :)
the 2nd version - warm water
A few days after that cold splash of what-users-are-actually-like, a hackathon organiser in Sydney was looking for hackathon mentors. I offered my support, and also the services of my product.
This time, I knew exactly what was needed. I found out the hackathon was happening in a couple days, and quickly jumped on nailing those features.
Boom – version 2 was ready. I added a smoother user flow, user education, clearer conveyance of information, etc.
A bunch of hackers used my product throughout the event, and that was nice. There was an obvious net positive!
I got deeper insights into what features to build next, and also better understood the use cases for my product.
my story
Consider this:
tl;dr: it took about 2 days to make a workable MVP (version 1), the middle was months of on/off experimenting and building, and then 1.5 hours to get it ready and manually onboard a customer. In terms of actually productising it, the start and end were most impactful; the middle could have been skipped.
So 2 days and a bit was all that was needed to launch.
- I started this project in Sep 2024; it took about 2 days of work to reach the MVP
- paused it, did other projects
- revisited it in Jan 2025, after finally tasting the magic of integrated AI-assisted dev tools
- spent a few weeks rotating through Cursor, Windsurf, and Cline to ‘vibe code’
- realised the framework I was using was complex enough that I couldn’t prompt effectively without actually understanding the nuance and methods in the framework
- spent a week learning the framework
- got back to the codebase with objective to refactor
- de-prioritised the project for other goals
- realised I needed to 80/20 this, and more importantly – get a user (which would be a stronger forcing function to continue this)
- got a customer’s interest
- the customer got back to me two days before the hackathon they were running (a fit for my use case)
- I spent about 1.5 hours onboarding and getting the product ready
- launched to customer’s users
Why not launch early?
Reflecting, there were two key influences that led me to not launch early:
Firstly, a focus on experimenting and learning over efficient product building:
- I first used an integrated AI-assisted dev tool (Cursor) in Jan, and it really empowered me.
- Besides exploring a few tools, I was trying to test how to apply things learnt in industry to a small project with AI-tooling.
- Hence, the focus was overall more on experimenting than on ‘making something people want’
- However, there was an end goal to build and launch the product
Secondly, Product Hunt takeaways bleeding into my expectations of building a startup:
- Last year, I attended a Product Hunt workshop run by Vincent Koc
- Product Hunt is essentially a voting platform for daily products launched across the world. Getting a high place helps garner early traction and visibility.
- He spoke on how, on Product Hunt:
	- for a greater chance at success, you have to launch a polished product and give it your best go, because the standards are now really high
	- it’s essentially a zero-sum game: for you to place higher, someone else must place lower
- Launching with that level of product completion means:
	- low risk: the product is quite complete
	- high effort
	- unknown returns: it’s a zero-sum game
- Now, I realise the difference between Product Hunt users and your actual users (your ICP) who care
- Product Hunt is a bit of a wild ocean – your ideal target users are likely a small minority there, if present at all
- With your actual users, you can launch something that kinda works:
	- low risk: your ideal users are desperate enough, and they’re a small group anyway
	- low effort
	- high guaranteed return: you get immediate feedback and insight into the truth
- I do believe some of that “polish before launch” takeaway hindered the “launch early” approach, which I could logically understand but hadn’t really experienced.
the positives
Now, I know and can internalise:
- 2 features/angles that would steer the boat in the right direction (and why competitors have some form of feature answering the associated root problems)
	- reduce user action to the minimum – while understanding their intent to the maximum
	- where to find the best data
- the other features I was building now go to the backlog
	- e.g. a customer console isn’t a priority when I can manually onboard – do things that don’t scale early
- small experiment > then test > repeat
+Huge 𝐖ub ✨🌱 Despite the low usage of my product in the first launch, the customer saw the potential and is still keen to try it for future use cases. And that gives me a lot of drive to continue.
show, don’t tell
+follow-up 𝐖ub ✨🌱 The second launch was much more successful in user uptake :)