AI & ML interests

The AI community building the future.

Recent Activity

AdinaY 
posted an update 7 days ago
Finch 💰 an enterprise-grade benchmark that measures whether AI agents can truly handle real-world finance & accounting work.

FinWorkBench/Finch

✨ Built from real enterprise data (Enron + financial institutions), not synthetic tasks
✨ Tests end-to-end finance workflows
✨ Multimodal & cross-file reasoning
✨ Expert-annotated (700+ hours) and genuinely challenging
angt 
posted an update 13 days ago
installama.sh at the TigerBeetle 1000x World Tour!

Last week I had the chance to give a short talk during the TigerBeetle 1000x World Tour (organized by @jedisct1 👏), a fantastic event celebrating high-performance engineering and the people who love pushing systems to their limits!

In the talk, I focused on the CPU and Linux side of things, with a simple goal in mind: making the installation of llama.cpp instant, automatic, and optimal, no matter your OS or hardware setup.

For the curious, here are the links worth checking out:
Event page: https://tigerbeetle.com/event/1000x
GitHub repo: https://github.com/angt/installama.sh
Talk: https://youtu.be/pg5NOeJZf0o?si=9Dkcfi2TqjnT_30e

More improvements are coming soon. Stay tuned!
angt 
posted an update 19 days ago
I'm excited to share that https://installama.sh is up and running! 🚀

On Linux / macOS / FreeBSD it is easier than ever:
curl https://installama.sh | sh


And Windows just joined the party 🥳
irm https://installama.sh | iex

Stay tuned for new backends on Windows!
angt 
posted an update 24 days ago
🚀 installama.sh update: Vulkan & FreeBSD support added!

The fastest way to install and run llama.cpp has just been updated!

We are expanding hardware and OS support to make local AI even more accessible. This includes:

🌋 Vulkan support for Linux on x86_64 and aarch64.
😈 FreeBSD support (CPU backend) on x86_64 and aarch64 too.
✨ Lots of small optimizations and improvements under the hood.

Give it a try right now:
curl angt.github.io/installama.sh | MODEL=unsloth/Qwen3-4B-GGUF:Q4_0 sh
angt 
posted an update about 1 month ago
One command line is all you need...

...to launch a local llama.cpp server on any Linux box or any Metal-powered Mac 🚀

curl angt.github.io/installama.sh | MODEL=unsloth/gpt-oss-20b-GGUF sh


Learn more: https://github.com/angt/installama.sh
badaoui 
posted an update about 1 month ago
Building high-performance, reproducible kernels for AMD ROCm just got a lot easier.

I've put together a guide on building, testing, and sharing ROCm-compatible kernels using the Hugging Face kernel-builder and kernels libraries, so you can focus on optimizing performance rather than spending time on setup.

Learn how to:

- Use Nix for reproducible builds
- Integrate kernels as native PyTorch operators
- Share your kernels on the Hub for anyone to use with kernels.get_kernel()

We use the 🏆 award-winning RadeonFlow GEMM kernel as a practical example.

📜 Check out the full guide here: https://huggingface.co/blog/build-rocm-kernels
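
For a quick taste of the consumer side, here is a minimal sketch of pulling a published kernel from the Hub with kernels.get_kernel(). It assumes pip install kernels torch and a ROCm (or CUDA) build of PyTorch; the repo id kernels-community/activation and the gelu_fast entry point follow the kernels library's community example rather than the RadeonFlow guide itself, so substitute the names of whichever kernel you load:

import torch
from kernels import get_kernel

# Fetch the pre-built kernel straight from the Hub; no local compilation step.
activation = get_kernel("kernels-community/activation")

# Call it like a native PyTorch operator. Note that device="cuda" also
# addresses AMD GPUs when PyTorch is a ROCm build.
x = torch.randn(16, 32, dtype=torch.float16, device="cuda")
y = torch.empty_like(x)
activation.gelu_fast(y, x)
print(y)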