Most importantly, you can often find out WHY a campaign exists with stuff like utm=small-biz.
If they’re a competitor, congrats, you just found a potential niche you may have overlooked. If you look at what events they send to their analytics, you can uncover what they know and don’t know about app usage or ad performance.
Former submarine nuke with a master's in NucE here (it's fun to see us come out of the woodwork for this).
Rods are always in the core. To start a reactor that is shut down (with the rods all the way at the bottom), you withdraw them slowly until the reactor is self-sustaining. From there, you increase power by increasing steam demand (as described in the parent comment above) and continue raising rods to increase or maintain temperature.
When the reactor is operating at power, the control rods are used primarily to 1) control steady-state coolant temperature and 2) provide a safe and reliable way to shut the reactor down quickly (by dropping them to the bottom of the core -- this is called a reactor scram). If you have a short-duration power transient for any reason, you can "shim" the rods in to prevent a power spike that might cause a protective action to occur (you shouldn't really ever have to do this except during emergency drills).
If the rods were drawn outside of the fuel region at power, they wouldn't be able to absorb any neutrons and wouldn't give you any way to control temperature or power. During certain maintenance while the reactor is shut down, you might pull one rod further out for testing.
Your question on uneven burning of fuel is insightful. That can happen, and it's caused by an uneven neutron flux (# of neutrons traveling through a unit surface area per unit time) distribution. The core designers take rod positioning into account when determining how to distribute fuel throughout the core in order to maintain a "flat" flux profile.
Thank you so much for the response. One more question if you don't mind: on the "How It's Made" show they show how the fuel goes from ore, to yellowcake, to pellets in zirconium-alloy (zircaloy) rods, to collections of rods in an assembly.
This is completely safe (compared to spent fuel), but how do you get the reaction started? Do you have to "light" it with a neutron source when you're ready to use the fuel for the first time? Or do you "light" it with radioactivity from existing fuel? Or a neutron reflector?
In "How It's Made" they didn't say anything like "the fuel assemblies are shipped to power plants with graphite moderators to prevent unwanted reactions during transit," so obviously there's no danger of an unwanted reaction outside of a reactor. So what kicks it off?
Fresh fuel pellets are “safe” in that they’re not going to kill you immediately, but they’re still fairly radioactive, not just from alpha decay, but from spontaneous fissioning, which produces neutrons. Pile em up and they’ll start a chain reaction all on their own. There’s even geological evidence of natural chain reactions in some uranium ore seams: https://en.wikipedia.org/wiki/Natural_nuclear_fission_reacto...
Sorry for the slow reply -- didn't realize there weren't notifications on HN.
U-235 (the fuel used in naval reactors) does undergo spontaneous fission, but not at a rate high enough to reach criticality. As one of the other posters mentioned, you can make it easier to achieve and maintain criticality by changing the shape of the core (so that fewer neutrons leak out), but in general you do need a neutron source inside the core that is just always spitting out enough neutrons to help the reactor achieve criticality as the control rods are withdrawn.
Once the core has operated at power for long enough, some core materials become "active" (from irradiation) and may help contribute to the neutron source.
Straight U-235 is fairly safe (iirc), but even then I don't think you'd want to ship the fuel assemblies with any moderator as moderated neutrons are what make fission more likely.
Criticality is a result of geometry. If you modify the geometry (by placing the rods in a reactor, by removing control rods, etc), you can vary the system from subcritical, to critical, to super-critical. No external neutron source necessary.
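A zeroth-order way to see what sub-, exactly-, and super-critical mean is a toy point model: each generation of neutrons produces k_eff neutrons in the next generation, where the geometry and rod positions set k_eff. This is a sketch only, with illustrative numbers; it ignores delayed neutrons, temperature feedback, and any fixed source.

```python
# Toy point-model of criticality: each neutron produces k_eff neutrons
# in the next generation. Geometry and control rods determine k_eff.
def neutron_population(k_eff, n0=1000.0, generations=50):
    """Neutron population after a number of generations."""
    n = n0
    for _ in range(generations):
        n *= k_eff
    return n

print(neutron_population(0.95))  # subcritical: population dies away
print(neutron_population(1.00))  # critical: self-sustaining
print(neutron_population(1.05))  # supercritical: population grows
```

Pulling a rod out nudges k_eff upward; a scram drops it well below 1 so the chain reaction dies off within a few generations.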
Over the last two months I've started to try to document and implement some of my learnings (inspired by this post [1]), mostly focusing on toy implementations of fine-tuning methods and some interesting prompting tricks. I come from a technical (but non-ML, non-SWE) background, and spent a lot of the last two years building up my broad computer science skillset while pivoting from military to business.
Most recently, I've rounded up some crowdsourced research on how well GPT-4 Turbo uses its new long context window and even did a small amount of research on my own (limited by how much I could justify spending on API usage).
This is one of the areas of LLMs that I find most interesting. So far, I've found simple question-answering over vectorstores to be a lackluster experience. In particular, the more information you embed and stick into the vectorstore, the less useful the system becomes, as you are less likely to get the information you're looking for (especially if users don't understand that their queries need to look like the docs they want to ask about).
I haven't had a chance to try out hypothetical embedded docs yet, but I expect they only provide a marginal improvement (especially if QAing over proprietary data or information).
I'd love to see any other interesting, more up-to-date resources anyone has found on this topic. I found this recent paper interesting: https://arxiv.org/abs/2304.11062
> In particular, the more information you embed and stick into the vectorstore, the less useful the system becomes as you are less likely to get the information you're looking for
Can you explain that? I don't follow why it would become less useful
It becomes a ranking problem, in a sense. There's lots of data that could be the answer and lots of context that "could" be relevant to put into the context window, but then you have to pick the right context and answer with the most correct information, which becomes less clear-cut as your dataset grows.
This is it. One of the "apps" I built was a slackbot for my classmates (in business school) that allows users to upload docs via slack (course notes, cases, etc.) that get embedded and you can then QA over in slack. I also added lots of hard-to-find or disparate information from our school like course reviews, registration information, calendars, etc. so we could all access it from one place.
The problem is once there are 10, 20, 30 different-but-similar documents in the vectorstore (like business school case studies), asking the bot "what are the key takeaways from the airbnb case" grabs a bunch of useless embedded documents to provide as context. Yes, I can tell users how to ask better questions, but it's a bad user experience and nobody sticks with it or tries to understand why their queries don't work.
I could use hypothetical document embeddings but the problem is a lot of the cases or course notes are proprietary or not publicly available, so I would guess that the hypothetical answers the LLM would come up with won't provide much better context.
This was built with langchain + pinecone.
edit: I think smarter people than I are working on a lot of better ways to do this, but I think one potential solution is to apply metadata to each document when embedding it (e.g., ask the LLM to apply any number of X preset metadata tags) and then, when retrieving context from the vectorstore, filtering the results by those tags.
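That tag-then-rank idea can be sketched in plain Python (no Pinecone or LangChain here, though Pinecone does support metadata filters on queries). The doc names, tags, and 2-D "embeddings" below are all made up for illustration:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, query_tags, store, k=2):
    """Keep only docs sharing a tag with the query, then rank
    the survivors by embedding similarity."""
    candidates = [d for d in store if query_tags & d["tags"]]
    candidates.sort(key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in candidates[:k]]

store = [
    {"id": "airbnb-case",  "vec": [1.0, 0.1],  "tags": {"case-study"}},
    {"id": "uber-case",    "vec": [0.9, 0.2],  "tags": {"case-study"}},
    {"id": "registration", "vec": [0.95, 0.1], "tags": {"logistics"}},
]

# The logistics doc never pollutes a case-study query, even though
# its (made-up) embedding happens to sit close to the query vector.
print(retrieve([1.0, 0.0], {"case-study"}, store))
# → ['airbnb-case', 'uber-case']
```

In practice the tags would come from an LLM classification pass at embedding time, and the filter would be applied server-side in the vectorstore query rather than in application code.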
> The problem is once there are 10, 20, 30 different-but-similar documents in the vectorstore
Sounds like a de-duping problem. Maybe use vector embeddings to find near identical documents and limit them in the context. i.e. maximize the vector distance between your context sources.
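One standard way to "maximize the vector distance between your context sources" is maximal marginal relevance (MMR): greedily pick documents that are relevant to the query but dissimilar to what's already been selected. A minimal sketch, with made-up doc names and 2-D embeddings:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def mmr(query, docs, k=2, lam=0.5):
    """Greedy maximal marginal relevance: trade off relevance to the
    query against redundancy with docs already selected."""
    selected, remaining = [], list(docs)
    while remaining and len(selected) < k:
        def score(d):
            relevance = cosine(query, d["vec"])
            redundancy = max((cosine(d["vec"], s["vec"]) for s in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return [d["id"] for d in selected]

docs = [
    {"id": "case-v1", "vec": [1.0, 0.0]},
    {"id": "case-v2", "vec": [0.99, 0.01]},  # near-duplicate of case-v1
    {"id": "notes",   "vec": [0.6, 0.8]},    # distinct document
]
query = [1.0, 0.1]

# Plain top-k by similarity returns both near-duplicates:
by_relevance = sorted(docs, key=lambda d: cosine(query, d["vec"]), reverse=True)
top2 = [d["id"] for d in by_relevance[:2]]

# MMR swaps the duplicate for the distinct doc:
diversified = mmr(query, docs, k=2)
```

LangChain retrievers expose this directly (an MMR search type), so it slots into the setup described above without custom code.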