Hacker News | beklein's comments

There is also a longer article on the SpaceX site: https://www.spacex.com/updates#xai-joins-spacex

This will actually work well with my current workflow: dictation for prompts, parallel execution, and working on multiple bigger and smaller projects, so waiting times while Codex is coding are fully utilized, plus easy commits with auto-generated commit messages. Wow, thank you for this. Since skills are now first-class tools, I will give it a try and see what I can accomplish with them.

I know/hope some OpenAI people are lurking in the comments, and perhaps they will implement this, or at least consider it: I would love to be able to use @ to add files via voice input as if I had typed it. So when I say "change the thingy at route slash to slash somewhere slash page dot tsx", I should get the same prompt as if I had typed it on my keyboard, including the file-pill UI element shown in the input box. Same for slash commands. Voice is a great input modality; please make it a first-class input. You are 90% there, and this way I won't need my dictation app (Handy, highly recommended) anymore.

Also, I see myself often using the built-in console to ls, cat, and rg to follow old patterns, and I would love to pin the console to a specific side of the screen instead of having it at the bottom. Please also support terminal tabs, or I'll need to learn tmux.


So much this. I'm eagerly waiting to see what Anthropic and OpenAI do to make dictation-first interaction a first-class citizen instead of requiring a separate app like Super Whisper. It would dramatically improve complex, flow-breaking interactions when adding files, referencing users or commands, etc.

Importantly, I want full voice control over the app and its interactions, not just dictated prompts.


Would love to see the original prompt for Nano Banana from OP somewhere. One that yields decent results for me is:

{
  "image_generation_prompt": {
    "subject_focus": {
      "primary": "Architectural exterior scene",
      "constraint": "Strictly preserve original building geometry, facade details, and structural layout",
      "reference_adherence": "High structural fidelity to input image"
    },
    "environment_and_season": {
      "season": "Late November, very late autumn",
      "weather": "Post-rain, overcast, gloomy, high humidity",
      "sky": "Heavy grey cloud cover, diffuse white/grey light, no direct sunlight",
      "ground_texture": "Wet asphalt/pavement, highly reflective puddles, wet concrete, scattering of wet brown decaying leaves"
    },
    "vegetation_details": {
      "trees": "Leafless branches, dormant skeletal trees, sparse lingering brown foliage",
      "color_palette": "Desaturated greens, browns, greys, russet, damp earth tones",
      "state": "Winter-ready, wet bark, dormant landscaping"
    },
    "human_element": {
      "density": "Sparse, minimal crowd",
      "clothing": "Heavy winter coats, scarves, boots, muted colors",
      "activity": "Walking briskly to avoid cold, holding closed wet umbrellas, hurrying, heads down against the wind",
      "mood": "Solitary, cold, urban transit"
    },
    "photographic_style": {
      "medium": "Realistic architectural photography",
      "camera": "35mm lens, sharp focus on architecture",
      "tone": "Cinematic, moody, desaturated, cool color temperature, blue-grey tint",
      "quality": "8k resolution, high dynamic range, hyper-realistic textures"
    }
  }
}


It's on Lovable so you can just fork it and take a look (the prompt is in supabase/functions/transform-render/index.ts):

Transform this idealized architectural rendering into the most brutally realistic, depressing photograph possible. This is the WORST CASE scenario - what the building will actually look like in reality:

- Set on a dreary, grey, overcast late November day with flat, lifeless lighting
- The sky is a uniform dirty grey, threatening rain
- All trees are completely bare - just skeletal branches against the grey sky
- The landscaping is dead, muddy, or non-existent. No lush gardens, just patchy brown grass and bare dirt
- Remove ALL people, the scene should feel empty and abandoned
- Any water features should look stagnant and grey
- Add realistic weathering, dirt streaks, and construction residue on the building
- The building materials should look how they actually appear, not the idealized clean version
- Include visible utility boxes, drainage grates, and other mundane infrastructure usually hidden in renders
- The overall mood should be bleak but realistic - this is what buyers will actually see on a random Tuesday in late autumn
- Maintain the exact building, angle, and composition, just strip away all the marketing polish

The goal is honest truth, not beauty. Show what the architect's client will actually see when they visit the site.


>> Remove ALL people, the scene should feel empty and abandoned

That really captures the vibe in Kendall Square on the weekend, but for maximum "honest truth" there should be double-parking, delivery trucks, and Ubers stuck in traffic waiting on a thousand people scurrying across the street from the subway entrance, huddling against the cold. Some dirty snowbanks and grey slush puddles in the crosswalks would really nail it.


Thank you! Learned something new today. I'll look out for this trick on other Lovable sites I stumble upon.

As far as I can tell, they say: "Mission control and data distribution are managed by EUMETSAT." They have published their own blog post here: https://www.eumetsat.int/features/see-earths-atmosphere-neve...

There they say: "Observations made by MTG-S1 will feed into data products that support national weather services …". So I guess there will be no simple, publicly available REST API or the like... but if anybody finds anything, let us know here :)



Nice find. So you need a client_id to access the API.

For the datasets I tried to access (like the full-disc image in visible wavelengths, MTG 0 degree), it is sufficient to register at EUMETSAT to get a username and password. The eumdac Python tool is probably the easiest way to access the data:

https://pypi.org/project/eumdac/

(If you do not want to use Python, the --debug option is quite useful for seeing exactly the requests made. The output is either some JSON metadata or a large ZIP with the NetCDF data.)
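Under the hood, eumdac (and the requests you see with --debug) talks to EUMETSAT's token-based REST API: you trade the consumer key/secret from your EUMETSAT profile for a short-lived bearer token, then use that token on Data Store requests. A rough sketch of just the token exchange, assuming the https://api.eumetsat.int/token endpoint with a standard OAuth2 client-credentials flow (untested; check the Data Store docs for the authoritative details):

```python
import base64
import urllib.request

# EUMETSAT API gateway token endpoint (assumption, per their Data Store docs)
TOKEN_URL = "https://api.eumetsat.int/token"

def build_token_request(consumer_key: str, consumer_secret: str) -> urllib.request.Request:
    """Build the OAuth2 client-credentials POST that exchanges your
    consumer key/secret for a short-lived bearer token."""
    basic = base64.b64encode(f"{consumer_key}:{consumer_secret}".encode()).decode()
    return urllib.request.Request(
        TOKEN_URL,
        data=b"grant_type=client_credentials",
        headers={
            "Authorization": f"Basic {basic}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

req = build_token_request("my-key", "my-secret")
# urllib.request.urlopen(req) would return JSON containing "access_token";
# eumdac handles this exchange (and token refresh) for you.
```

The point is only that nothing magical is happening: a username/password registration gives you the key/secret, and everything after that is plain HTTPS.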


Read the Data Store user guide. You have to register.

Most weather data isn't generally available via easy-to-query REST APIs (at least not from the original sources). For one side project I wanted to use NOMADS data, and it was quite a grind downloading and processing the raw datasets into something usable at the application level (or viable to expose via an API).

That's why there are services/products whose sole purpose is taking all these region-specific data sources and processing them into a generic JSON API.

The government orgs probably do it intentionally so they don’t have ten million devices pinging their servers to update weather widgets.


Check out open-meteo. They've got pretty extensive historical and forecast weather APIs in easy-to-consume formats. https://open-meteo.com/en/features#available_apis
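To give a feel for how consumable it is: an Open-Meteo forecast is a plain HTTPS GET with query parameters (no API key), and the JSON response pairs an "hourly" time array with one array per requested variable. A minimal sketch; the sample payload below is hand-written to match their documented response shape, not a live response:

```python
import json
from urllib.parse import urlencode

BASE = "https://api.open-meteo.com/v1/forecast"

def forecast_url(lat: float, lon: float, hourly_vars=("temperature_2m",)) -> str:
    """Compose a forecast request URL; open-meteo takes plain query params."""
    params = {"latitude": lat, "longitude": lon, "hourly": ",".join(hourly_vars)}
    return f"{BASE}?{urlencode(params)}"

# Response shape: parallel arrays under "hourly", one per variable.
sample = json.loads(
    '{"hourly": {"time": ["2024-01-01T00:00"], "temperature_2m": [3.2]}}'
)
readings = dict(zip(sample["hourly"]["time"], sample["hourly"]["temperature_2m"]))
```

Fetching `forecast_url(...)` with any HTTP client and zipping the arrays like this is essentially the whole integration, which is exactly the "generic JSON API" convenience the raw sources don't give you.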

The Latent Space podcast just released a relevant episode today, interviewing Kevin Weil and Victor Powell (both now at OpenAI), with some demos, background and context, and a Q&A. The YouTube link is here: https://www.youtube.com/watch?v=W2cBTVr8nxU

Oh, I was here to post it, haha - thank you for doing that job for me so I'm not a total shill. I really enjoyed meeting them and was impressed by the sheer ambition of the AI for Science effort at OAI. In some sense I'm making a 10,000x smaller-scale bet than OAI on AI for Science "taking off" this year, with the upcoming dedicated Latent Space Science pod.

I generally think there's a lot of fertile ground for smart generalist engineers to make a ton of progress here this year, and it will probably be extremely financially and personally rewarding. So I broadly want to create a dedicated pod highlighting opportunities for people who don't traditionally think of themselves as "in science" to cross over into "AI for hard STEM", because it turns out that 1) they need you, 2) you can fill in what you don't know, 3) it will be impactful/challenging/rewarding, and 4) we've exhausted common knowledge frontiers and benchmarks anyway, so the only* people left working on civilization-impacting/change-history-forever hard problems are basically at this frontier.

*conscious exaggeration, sorry


Wasn't aware you're so active on HN; sorry for stealing your karma.

Love the idea of a dedicated series/pod where normal people take on hard problems by using and leveraging the emergent capabilities of frontier AI systems.

Anyway, thanks for the pod!


Not at all about stealing karma - I don't care much about fake internet points.

Yes, you got the important thing!


Hope you like it :D I'm here if you have questions, too

I wonder if electrical engineers felt this way about the reliability of the silicon crystal lattice their circuits rely upon…

Anthropic recently posted an AMA-style interview with Amanda Askell, the primary author of this document, on their YouTube channel. It gives a bit of context about some of the decisions and reasoning behind the constitution: https://www.youtube.com/watch?v=I9aGC6Ui3eE

Not the author/contributor, but the app is built with Tauri for easy multi-platform support, so the backend logic is implemented in Rust and the frontend UI in TypeScript. I think it's a valid choice. GitHub's language stats do not include any model _code_; the models are downloaded separately the first time you use them. Hope this helps.

I know many people hate sites like this, but I actually like them for these use cases. You can get a quick, LLM-generated overview of the architecture, e.g. here: https://codewiki.google/github.com/cjpais/handy


Most of my complex documents are, luckily, Markdown files.

I can recommend https://github.com/tobi/qmd/ . It's a simple CLI tool for searching these kinds of files. My previous workflow was based on fzf, but this tool gives better results and enables even fuzzier queries. I don't use it for code, though.


Given that preface, I was really expecting that link to be a grepping tool rewritten in Golang or something, or perhaps customized for Markdown to weight matches in "# heading title"s more heavily, for example.
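That heading-weighting idea is easy to toy with. A throwaway sketch (this is not how qmd actually ranks anything, just the idea of scoring matches on "# heading" lines higher than body matches):

```python
import re

def heading_weighted_grep(markdown: str, query: str, heading_boost: int = 3):
    """Toy ranker: each line containing `query` scores 1; a match on a
    Markdown heading line ('# ...' through '###### ...') scores `heading_boost`.
    Returns (weight, line_number, line) tuples, highest weight first."""
    hits = []
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        if query.lower() in line.lower():
            weight = heading_boost if re.match(r"#{1,6}\s", line) else 1
            hits.append((weight, lineno, line.strip()))
    # sorted() is stable, so ties keep their original document order
    return sorted(hits, key=lambda h: -h[0])
```

A real tool would tokenize, score fuzzily, and carry section context down to child lines, but the ranking trick is the same shape.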


Here's a rust one: https://github.com/BeaconBay/ck

I haven't used it extensively, but the semantic grep alone was kind of worth it.


Right, I should have said Rust. Golang is so 2017!


Thanks for the post and the explanation.

I really enjoyed this related article about prompt caching, where the author explains some of the same principles with some additional visuals, though the main point there is why KV cache hits make your LLM API usage much cheaper: https://ngrok.com/blog/prompt-caching/

