Launch HN: Golpo (YC S25) – AI-generated explainer videos (video.golpoai.com)
105 points by skar01 23 hours ago | 86 comments
torlok 8 hours ago [-]
Going by the example videos, this is nothing like I'd expect a whiteboard video to look like. It fills the slides in erratically, even text. No human does that. It's distracting more than anything. If a human teacher wants to show cause-and-effect, they'll draw the cause, then an arrow, then the effect to emphasize what they're saying. Your videos resemble printing more than drawing.
giorgioz 10 hours ago [-]
I love the concept, but the implementation in the demo doesn't seem good enough to me. I think the black-and-white demo is quite ugly... 1) Explainer videos are not in black and white. 2) The images are not usually drawn live. 3) Text being drawn on the go is just a fake animation. In reality, most explainer videos show short, meaningful sentences appearing all at once so the viewer has more time to read them.

Keep refining the generated demos! Best of luck

fxwin 9 hours ago [-]
I'm also not the biggest fan of the white-on-black style, but there is definitely precedent (at least in science-youtube-space) for explainer videos "drawn live" [1-4]

[1] https://www.youtube.com/@Aleph0

[2] https://www.youtube.com/@MinutePhysics

[3] https://www.youtube.com/@12tone

[4] https://www.youtube.com/@SimplilearnOfficial

typs 23 hours ago [-]
If that demo video is how it actually works, this is a pretty amazing technical feat. I’m definitely going to try this out.

Edit: I've used it. It's amazing. I'm going to be using this a lot.

skar01 21 hours ago [-]
Thank you!!
trenchpilgrim 18 hours ago [-]
I threw the user docs for my open source project in there and it was... surprisingly not terrible!

Note: Your paywall for downloading the video is easily bypassed by Inspect Element :)

My main concern for you is that y'all will get Sherlocked by OpenAI/Anthropic/Google.

mkagenius 11 hours ago [-]
Not only the giants: they will also face a significant threat from open source [1]. But they just need to carve out their own user base and be profitable in that space.

1. For example, I have built http://gitpodcast.com, which can be run for free. It can also be self-hosted using the free tiers of Gemini and Azure Speech.

delbronski 23 hours ago [-]
Wow, I was skeptical at first, but the result was pretty awesome!

Congrats! Cool product.

Feedback: I tried making a product explainer video for a tree-planting rover I'm working on. The rover looked different in every scene. I can imagine this kind of consistency may be more difficult to get right. Maybe if I had uploaded a photo of how the rover looks, it might have helped. In one scene the rover looks like an actual rover; in another it looks like a humanoid robot.

But still, super impressed!

skar01 22 hours ago [-]
Thanks! We are working on the consistency.
dtran 19 hours ago [-]
Love this idea! The Whiteboard Gym explainer video seemed really text-heavy (although I did learn enough to guess that that's because text likely beat drawing/adding an image for these abstract concepts for the GRPO agent). I found Shraman's personal story video much more engaging! https://x.com/ShramanKar/status/1955404430943326239

Signed up and waiting on a video :)

Edit: here's a 58s explainer video for the concept of body doubling: https://video.golpoai.com/share/448557cc-cf06-4cad-9fb2-f56b...

addandsubtract 7 hours ago [-]
The body doubling concept is something I've noticed myself, but never knew there was a term for it. TIL :)
qwertytyyuu 2 hours ago [-]
The way the text appears is so weird; it's like rendering by plotting each letter asynchronously. I wonder how it compares to auto-generated PowerPoint presentations. I suspect it might be worse.
albumen 21 hours ago [-]
Love it. The tone is just right. A couple of suggestions:

Have you tried a "filled line" approach, rather than "outlined" strokes? Might feel more like individual marker strokes.

I made a demo video on the free tier and it did a great job explaining acoustic delay lines in an accessible fashion, after I fed it a catalog PDF with an overview of the historical artefact and photographs of an example unit. Unfortunately, the service invented its own idea of what the artefact looked like. Could you offer a storyboard view and let users erase the incorrect parts and sketch their own shapes? Or split the drawing up into logical elements that the user could redraw as needed, which would then be reused wherever that element appears in other frames?

skar01 21 hours ago [-]
Thank you!! We are actually currently working on the storyboarding feature!!
ing33k 3 hours ago [-]
it created this video for an app I am working on. https://video.golpoai.com/share/8de80271-1109-48e4-ac52-9265...
achempion 7 hours ago [-]
Where can I find what a credit is? It says 150 credits for the Growth plan, but it doesn't explain how many credits a single video needs.

p.s. the pricing section is unreadable below 840px width

meistertigran 18 hours ago [-]
Can you share the paper mentioned in the demo video?
mclau157 23 hours ago [-]
I have used AI in the past to learn a topic by creating a GUI with input sliders and an output, so I could see how things change when I change parameters. That could work here too: people could basically ask "what if x happens" and see the result, which also makes them feel in control of the learning.
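(A minimal sketch of the slider idea above, assuming matplotlib's Slider widget; the sine curve and its "frequency" parameter are just illustrative stand-ins for whatever concept is being taught:)

    # Sketch: the learner drags a slider and immediately sees how the output changes.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Slider

    x = np.linspace(0, 2 * np.pi, 500)

    fig, ax = plt.subplots()
    fig.subplots_adjust(bottom=0.25)                  # leave room for the slider
    (line,) = ax.plot(x, np.sin(1.0 * x))

    slider_ax = fig.add_axes([0.2, 0.1, 0.6, 0.03])   # left, bottom, width, height
    freq = Slider(slider_ax, "frequency", 0.1, 5.0, valinit=1.0)

    def update(val):
        # "what if x happens": redraw the curve whenever the slider moves
        line.set_ydata(np.sin(freq.val * x))
        fig.canvas.draw_idle()

    freq.on_changed(update)
    plt.show()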
skar01 21 hours ago [-]
Thank you!!
snowfield 6 hours ago [-]
I want to pay 20usd just to troll my friends with explainer videos on why they're shit at video games :D
ludicrousdispla 5 hours ago [-]
that seems like excellent product market fit as the AI generated explainer videos won't even need to be correct, and the more incorrect they are the better the troll
adi4213 23 hours ago [-]
This is neat, but I wasn't able to get it to work ("server overloaded" is what the browser app said). I'd also recommend registering a custom domain in Supabase so the Google SSO shows the golpo domain - a small but professional-signaling affordance.
skar01 21 hours ago [-]
We will soon! Wanted to get the model working first! Could you try again?
ceroxylon 22 hours ago [-]
The generated graphic in the linked demo for "Training materials that captivate" is a sketch of someone looking forlorn while holding a piece of paper. Is there a way to do in-line edits to the generated result to polish out things like this?
skar01 21 hours ago [-]
We are working on that. There will ultimately be a storyboard feature where you can edit frame by frame!
tk90 22 hours ago [-]
Pretty cool, especially the voice and background music - feels just right.

I asked it about pointers in Rust. The transcript and images were great, very approachable!

"Do not let your computer sleep" -> is this using GPU on my machine or something?

skar01 21 hours ago [-]
No! We only had that because we had not built the library feature yet and forgot to remove it. Now you can access it through there!!
drawnwren 22 hours ago [-]
I'm sure someone else has mentioned this but your video on the main page correctly has GRPO the first time it's introduced but then every time you mention it after that -- you've swapped it to GPRO.
Lienetic 22 hours ago [-]
This is really interesting, definitely going to give it a try! Seems fun but are you seeing people actually needing to make lots of videos like this? What's your vision - how does this become really big?
reactordev 23 hours ago [-]
This is actually pretty amazing. Not only does it work, it’s good. At least from the demo videos. YMMV.

What I always wanted to do was to teach what I know but I lack the time commitment to get it out. This might be a way…

skar01 23 hours ago [-]
Thank you so much!
sdotdev 15 hours ago [-]
I'll try the 1 free generation soon, but the way the text appears randomly in that landing-page demo video is really weird. I keep losing track of where I'm reading, too, as the audio sometimes is not perfectly synced. The sync is not that bad, but it could be better.
raylad 16 hours ago [-]
Feedback on the text: I find the way that the text generates randomly across the line very distracting because I (and I think most people) read from left to right. Having letters appear randomly is much more difficult to follow.

Are there options to have the text appear differently?

dfee 15 hours ago [-]
From the video

> The Al needs to figure out not just what to draw, but precisely when to draw it

;)

ishita159 23 hours ago [-]
Planning to add links as input anytime soon?

I would love to add a link to my product docs, upload some images and have it generate an onboarding video of the platform.

skar02 23 hours ago [-]
Yes, very soon. We already support this via API and will add to our platform too!
skar01 20 hours ago [-]
Our API is currently available to our enterprise customers!
skar01 23 hours ago [-]
Hey also, if you want to suggest a video, we could try generating one and reply here with a link! Just tell us what you want the video to be about!!
cube2222 23 hours ago [-]
Hey, kudos for the product / demo on the website - it managed to keep me engaged to watch it till the end.

I’m mostly curious how it fares with more complex topics and with doing actually informative (rather than just “plain background”) illustrations.

Like a video explaining transformer attention in LLMs, to stay on the AI topic?

skar01 22 hours ago [-]
Yeah so it actually does pretty well. Here are some sample videos:

https://www.youtube.com/watch?v=33xNoWHYZGA&t=1s

https://www.youtube.com/watch?v=w_ZwKhptUqI

andhuman 11 hours ago [-]
Could you do a video about latent heat?
WasimBhai 23 hours ago [-]
I have 2 credits but it won't let me generate a video. Founders, if you are around, you may want to debug.
skar02 22 hours ago [-]
Huh, that's odd. Could you DM me your email?
skar01 22 hours ago [-]
Or just email us at founders@golpoai.com
OG_BME 22 hours ago [-]
I created a video on the free tier, the shareable link didn't work (404), I upgraded to be able to download it, and it seems to have disappeared? It says "Still generating" in my Library.

The video UUID starts with "f5fbd6c7", hopefully that's sufficient to identify me!

skar02 22 hours ago [-]
Sorry about that! I found your video. Should I link it here or DM it to you (can you DM on Hacker News?)? You could also email me at shreyas2@stanford.edu, and I can send it there.
dang 21 hours ago [-]
(No DMs on HN, at least not yet)
OG_BME 21 hours ago [-]
Just emailed you! Thanks.
android521 11 hours ago [-]
Do you have a developer API that empowers developers to create explainer videos?
mandeepj 16 hours ago [-]
Congrats on the launch!

If I may ask - how do you generate your audio?

ActVen 19 hours ago [-]
Popup window with "Load Failed" after it had some progress on the bar past 40% or so. Shows up in the library, but won't play. I just deleted it for now.
skar01 19 hours ago [-]
Could you try again?
ActVen 19 hours ago [-]
Just tried on Chrome instead of safari and it worked this time. Thanks and congrats on the launch!
skar01 18 hours ago [-]
Thank you!
UltraSane 12 hours ago [-]
Impressive. Reminds me of Google NotebookLM's AI-generated podcasts of PDFs.
poly2it 23 hours ago [-]
The creator tier ($99.99/mo) lists "15 seconds" as a perk. Does this mean the maximum video length is 15 seconds?
skar02 23 hours ago [-]
One of the founders here! No, it's not. The max video length is up to 2 min, which is also the case in any non-free tier. We just include a 15-second option for that tier (because people need it for things like FB ads).
poly2it 20 hours ago [-]
Maybe clarify it a bit, e.g. "Short 15-second option".
BugsJustFindMe 19 hours ago [-]
In the post you talk about 5–10 minute explainers.

What does one do if they want to make a 5-10 minute explainer if the maximum length is 2 minutes?

bangaladore 23 hours ago [-]
Given that the next tier up is "Create longer/more detailed video (up to 4 min long)", I'd guess you are right.

Seems like this is pretty useless unless you pay $200 per month. That may be a reasonable number for the clearly commercial/enterprise use case, but I'm just not certain what you can do with the lower tiers.

atleastoptimal 14 hours ago [-]
Someone needs to do something about the purple, dark-mode, rounded-corner Tailwind style that has infected all LLMs now.

cool product though!

KaoruAoiShiho 23 hours ago [-]
Did NotebookLM just come out with this? Very tough to compete with Google.
empressplay 19 hours ago [-]
Can confirm: it creates slides, though, not whiteboard animations. The slides are in color and have graphs, clipart, etc. (but they are static, and the whiteboard drawing is cooler!)

It created an 8-minute video explaining my Logo-based coding language using 50 sources, and it was free.

https://www.youtube.com/watch?v=HZW75burwQc

skar01 19 hours ago [-]
We have color as well and support graphs and clipart
nextworddev 22 hours ago [-]
Has anyone tried prompting Veo to create these videos?
skar02 22 hours ago [-]
We have! Veo, I believe, can't do more than 8-second videos, and when prompted the results aren't very coherent in our experience.
nextworddev 22 hours ago [-]
oh had no idea. will try your product
CalRobert 22 hours ago [-]
So it eats concepts and makes videos?

One is reminded of SMBC

https://www.seekpng.com/png/detail/213-2132749_gulpo-decal-f...

skar02 22 hours ago [-]
Haha! The name actually comes from the word story in Bengali.
subhro 22 hours ago [-]
From one Kar to another, fantastic story (দূর্দান্ত গল্প). Congratulations.
skar02 22 hours ago [-]
Thanks!
metalliqaz 23 hours ago [-]
My suggestion would be to re-think the demo videos. I have only watched most of the way through the "function pointers in C" example. If I didn't already know C well, I would not be able to follow it. The technical diagrams don't stay on the screen long enough for new learners to process the information. These videos probably look fantastic to the person who wrote the document they summarize, but to a newbie the information is fleeting and hard to follow. The machine doesn't understand that the screen shouldn't be completely wiped all the time while it follows the narrative. Some visuals should stay static for whole paragraphs, or remain visible while detail is marked up around them. For a true master of the art, see 3blue1brown.
bangaladore 23 hours ago [-]
> For a true master of the art, see 3blue1brown.

I agree. Rather than (what I assume is) E2E text -> video/audio output, it seems like training a model to use the community fork of Manim (the animation library 3blue1brown built for his videos) would produce a better result; a minimal scene sketch follows the reference below.

[1] https://github.com/ManimCommunity/manim/
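(For concreteness, a minimal Manim Community scene in that spirit might look like the sketch below; the scene name, text, and layout are purely illustrative, not anything Golpo actually does:)

    # Sketch: draw a cause, then an arrow, then an effect, in order, and keep it on screen.
    from manim import Scene, Text, Arrow, Create, Write, FadeIn, LEFT, RIGHT, UP

    class CauseEffect(Scene):
        def construct(self):
            title = Text("Function pointers in C").to_edge(UP)
            self.play(Write(title))                  # text animated stroke by stroke

            cause = Text("call qsort()").shift(LEFT * 3)
            effect = Text("comparator runs").shift(RIGHT * 3)
            arrow = Arrow(start=cause.get_right(), end=effect.get_left())

            self.play(FadeIn(cause))
            self.play(Create(arrow))                 # arrow drawn after the cause
            self.play(FadeIn(effect))
            self.wait(2)                             # let the viewer absorb the diagram

Rendering it is a single command, e.g. "manim -pql scene.py CauseEffect".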

albumen 21 hours ago [-]
Manim is awesome and I'd love to see that, but it doesn't easily offer the "hand-drawn whiteboard" look they've got currently.
metalliqaz 23 hours ago [-]
So... if I had the enterprise accounts for various LLM services, could I dupe this company with a basic upload page and a nice big prompt?
Wolf_Larsen 23 hours ago [-]
It's not that simple, but it would be straightforward to duplicate the outputs of this with a simple LLM + ffmpeg workflow. They did mention a custom model on the landing page, and if they've trained one, then you would be spending much more money on each output than they are, because without a fine-tuned model there would be a lot of inference done for QA and refinement of each prompt, clip, and frame.
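(A rough sketch of that workflow, assuming an upstream LLM/TTS step has already produced one PNG and one narration MP3 per scene; all file names here are hypothetical:)

    # Sketch: stitch per-scene stills and narration into one explainer video with ffmpeg.
    import subprocess

    scenes = ["scene_01", "scene_02", "scene_03"]    # produced upstream by the LLM + TTS step

    # 1. Turn each still image + narration clip into a short video segment.
    for name in scenes:
        subprocess.run([
            "ffmpeg", "-y",
            "-loop", "1", "-i", f"{name}.png",       # hold the drawing on screen
            "-i", f"{name}.mp3",                     # narration for this scene
            "-c:v", "libx264", "-tune", "stillimage",
            "-c:a", "aac", "-pix_fmt", "yuv420p",
            "-shortest",                             # segment ends when the audio ends
            f"{name}.mp4",
        ], check=True)

    # 2. Concatenate the segments into the final video.
    with open("segments.txt", "w") as f:
        f.writelines(f"file '{name}.mp4'\n" for name in scenes)

    subprocess.run([
        "ffmpeg", "-y", "-f", "concat", "-safe", "0",
        "-i", "segments.txt", "-c", "copy", "explainer.mp4",
    ], check=True)

The QA and refinement loop mentioned above would sit in front of this, regenerating a scene's image or narration before stitching.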
MarcelOlsz 20 hours ago [-]
"Custom model" usually translates to "deployed an OSS model and tweaked a few things" like 99% of the time.
Lienetic 23 hours ago [-]
I'm curious - do you feel differently about some of these coding and coding-adjacent tools out there like Cursor and Lovable?
metalliqaz 22 hours ago [-]
no, not really. I think they are massively over-valued but in the tech world... what else is new? I view those tools as mostly a convenience. They are integrating things into nice easy packages to use. That's the value.

With this... eh. Most people don't need to make more than one or two explainer videos, so are they going to take on a new monthly fee for that? And then there are power users who do it all the time, but almost surely have their own workflow put together that is customized to exactly what they want.

At any point, one of the big players could introduce this as a feature for their main product.

ayaros 17 hours ago [-]
In the Khan Academy videos I remember watching, an instructor would actually write on a tablet; you'd see each letter get hand-written one by one, in order. Is there no way to get it to do that? What the AI is doing instead is building up the strokes of every character on the line of text all at once, which looks completely unnatural. The awkwardness is compounded by the fact that the letters are outlined, so it takes even more steps to create them.

In addition, the line-art style of the illustrations looks like that same cartoonish-AI-slop style I see everywhere now. I just can't take it seriously.

If this tool is widely deployed it's just going to get used to spread more misinformation. I'm sure it will be great for bad actors and spammers to have yet another tool in their toolbox to spread whatever weird content or messages they want. But for the rest of us, that means search engines and YouTube and other places will be filled with a million AI-generated half-baked inferior copies of Khan Academy. It's already hard enough to find good educational resources online if you don't know where to look, and this will only make the problem worse.

You'll just have to forgive me if I'm not really excited about this tool.

...also the name is a bit weird. It reminds me of "Gulpo, the fish who eats concepts" from that classic SMBC cartoon. (https://www.smbc-comics.com/comic/2010-12-15)

whitepaint 8 hours ago [-]
I've tried it and it is really cool. Well done and good luck.
BoorishBears 20 hours ago [-]
Very cool: what output format is the model producing?

Straight vector paths?

ks2048 14 hours ago [-]
I made it 8 seconds into the "function pointers in C" video and immediately stopped. It went too fast to read the code examples and diagrams (the second "slide" appears for 1 second... and what is that array it's showing?). If you go back and look at the code (a three-line swap function), it's messed up: no opening bracket, and where is the closing bracket? It's supposed to "swap first and last", but it's hard-coded to length-3 arrays?

I'm sure AI could help make good animations like this, but this looks like slop.

personjerry 14 hours ago [-]
I feel like this is another case of throwing AI at a problem that doesn't require it. Khan Academy itself just hired people to make its videos at a very reasonable wage. Why would you need to add AI into the equation? If you wanted to, you could build a platform of basic video/whiteboard content creators at a very reasonable price point.
wordpad 12 hours ago [-]
You can't have arbitrary content with a human in the workflow.
personjerry 12 hours ago [-]
You can absolutely hire a human to make arbitrary content