Launcher.so – The interface to AI isn't a chat window, it's a keystroke

launcher.so

1 point by deep933 8 hours ago

Hi HN,

I've been building [*Launcher.so*](https://launcher.so/), an AI-first launcher that lives in your OS and lets you interact with models using *selected text, files, screenshots, or voice*, all triggered with a single keystroke.

It started from frustration: AI tools are powerful, but they live in tabs, web UIs, IDEs, or CLI wrappers. I didn't want to open a chat window every time I needed a quick insight or wanted to automate something.

*So I built a keystroke-triggered interface that works like this:*

- Press Option + Space (or any shortcut you choose)
- Select text or a file, take a screenshot, or speak
- Choose a model (o4, Sonnet 4, Flash 2.5, Ollama, etc.)
- Get an answer, or run an action (like “send to Notion” or “create GitHub issue”)
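To give a feel for the keystroke part (a simplified sketch, not our production code), here's roughly how a global Option + Space hotkey can be wired up with Electron's globalShortcut API; `openPromptPalette` is a hypothetical stand-in for the actual launcher UI:

```ts
// Minimal sketch of a global-hotkey trigger using Electron's globalShortcut.
// openPromptPalette is a hypothetical stand-in for the real launcher window.
import { app, globalShortcut, clipboard } from "electron";

function openPromptPalette(opts: { context: string }) {
  // Hypothetical stub: a real launcher would show an always-on-top
  // prompt window here; this just logs the captured context.
  console.log("palette opened with context:", opts.context);
}

app.whenReady().then(() => {
  // "Alt+Space" corresponds to Option + Space on macOS.
  const ok = globalShortcut.register("Alt+Space", () => {
    // Grab the current clipboard text as the model's context.
    openPromptPalette({ context: clipboard.readText() });
  });
  if (!ok) console.error("Alt+Space is already taken by another app");
});

// Always release global shortcuts on exit.
app.on("will-quit", () => globalShortcut.unregisterAll());
```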

It also supports [Model Context Protocol (MCP)](https://modelcontextprotocol.org/), so external tools (GitHub, Notion, Slack, etc.) can be plugged in and called directly.
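To make "plugged in and called directly" concrete: an MCP client connects to a tool server and invokes its tools over a standard protocol. Below is a rough sketch using the TypeScript SDK and the reference GitHub server; the `create_issue` tool name and its arguments are illustrative assumptions, so check the server's actual tool list before relying on them:

```ts
// Sketch of an MCP client calling a tool via @modelcontextprotocol/sdk.
// The "create_issue" tool name and arguments are assumptions based on the
// reference GitHub server, not something this post guarantees.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the reference GitHub MCP server as a subprocess over stdio.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
});

const client = new Client(
  { name: "launcher-sketch", version: "0.0.1" },
  { capabilities: {} }
);
await client.connect(transport);

// Discover what the server exposes, then invoke one tool directly.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "create_issue",
  arguments: { owner: "me", repo: "demo", title: "Filed from the launcher" },
});
console.log(result.content);
```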

I'm still a week away from the early release, but I'm sharing now to get feedback:

- Would this solve anything you’ve hit in your own workflow?
- Does “keystroke, not chat window” resonate with you?
- What would you expect from a tool like Launcher.so?

[Join the waitlist](https://launcher.so/) if you’re curious.

Or just share your thoughts here; I’d love your input.

Thanks,

*Deep*

pvg 8 hours ago

You could post this when you're ready for people to try it, but waitlists are mostly off topic on HN.