Samsara | Product, Infrastructure, Mobile, Site Reliability, Security Engineers | San Francisco, Onsite
Build AWS for physical infrastructure.
Samsara was started by the founders of Meraki and has a small, tight-knit engineering team that's quickly growing. We are looking for people who love building and seeing their code get used by customers.
Our backend is powered by Go/GraphQL/gRPC and our frontend applications use React/React Native/TypeScript.
At Samsara, our Go GraphQL implementation (https://github.com/samsarahq/thunder) works this way: we write resolver functions using plain old Go types and use some reflection magic at startup to compute the GraphQL type information.
Our docs are a bit sparse at the moment, but this means that with code like:
    type User struct {
        Id   int64
        Name string
    }

    func rootUser(ctx context.Context, args struct{ Id int64 }) (*User, error) {
        // ...
    }

    schemaRoot.FieldFunc("user", rootUser)
We can derive the GraphQL SDL types automatically (by looking at the argument and return types) and use them with other GraphQL tooling, and it saves folks from writing out a separate schema and keeping it in sync.
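For the snippet above, the derived schema would look roughly like this; the exact field casing, scalar mapping for int64, and nullability depend on thunder's conventions, so treat this as a sketch rather than thunder's actual output:

```graphql
type User {
  id: Int!
  name: String!
}

type Query {
  user(id: Int!): User
}
```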
I think this is something GraphQL implementations are still trying to figure out. At work (samsara.com) we use time-based batching[0] for our queries to mitigate N+1 issues in our GraphQL server.
Separate branches of the GraphQL query are handed to independent goroutines. When batchable RPCs (to the database, or other services) occur, we delay execution by ~1ms and wait for any other RPCs of the same type (and keep delaying, up to ~20ms total). This works pretty well for our more expensive field resolvers. We've also thought about ways to examine and track the execution of the goroutines (similar to the facebook/dataloader approach), but the time-based batches have served us well so far.
We also aggressively cache queries and RPCs so that the same data is not expensive to fetch across different branches of a query.
Clara is building the simplest possible interface to getting work done.
Every person on our team is involved in the thinking that creates their work - full stack in the broadest sense of the term. This means identifying, owning, and driving projects to completion.
We believe shipping early and frequently builds better products. An extreme example: we scheduled thousands of meetings entirely manually for our first Clara customers before building any software at all.
Accepting human dependency is the fastest way to build useful machine intelligence. The failure of intelligence products to date has fundamentally been a failure to build trust, a consequence of unreliability and lack of focus (think: Siri). Conversely, Clara has delivered a highly reliable, focused, and useful natural language interface from day one.
We’re looking for frontend, backend, and machine learning engineers to join our early team. Check out our full descriptions for each role [1], and feel free to ping me directly at stephen@claralabs.com if you have any questions!
/r/books with its 3 million subscribers is really, really bad.
/r/literature is much smaller and therefore better: way more interesting content, though still extremely entry-level, and plagued with stereotypical redditisms (one of the top posts right now is Tao Lin translated into Latin).
Try looking for small genre-specific subreddits, like /r/printSF for science fiction books etc. More focused and insightful discussion seems strongly inversely correlated with subreddit size.
I've curated my list of Facebook friends to be something like that. The real-life friends who make up a core part of that list include a lot of people from a membership organization for gifted children, whose parents then form a mutual support network. This, to be sure, is a hard strategy to replicate exactly, but over time I think your professional or other affinity groupings will help you find people who like to discuss the books you like thoughtfully.
I get good book recommendations here on HN all the time, most recently a set of books about German history,[1] but we hardly ever have extended book discussions here.
Would you pay for this? Like a couple of bucks per discussion or something?
I've thought about the same thing before but can't figure out how the economics would work. Everyone expects things to be free free free nowadays.
I think it could grow into something really cool where you could have authors & other respected thought-leaders participate and have very deep, insightful conversations.
I think, psychologically, I wouldn't value something I had to pay for; I'd feel more like a "consumer" than a participant. (And from an economic standpoint, the users would be generating a lot of the value but capturing none of it.)
It's just a psychological thing, and I'm sure different people have different psychologies around it. (And yes, I'd probably value my house more if I were not paying rent on it.)
As for capturing value, I'm referring specifically to the monetary aspect: assuming the company adds value by providing the platform and gets paid for it, while the users add value by providing the content but do not get paid for it, the explicit introduction of money into the ecosystem creates a situation where one party is seen to be asymmetrically rewarded for the overall success of the ecosystem.
This is also an issue for me. The answer is a book group, if you can find like-minded individuals or people willing to read. Too many people just don't read anything anymore, so the possibility of just 'running into someone at a party' who reads has been greatly reduced. It might make for an interesting meetup at a maker space, I suspect.
If the books you want to discuss are still under active copyright (Stephen King, Cormac McCarthy, etc.) rather than in the public domain (e.g. Mark Twain, Shakespeare), how do you envision the shared-annotations feature working?
For example, Google Books doesn't show pages 43-44 (and many other pages are missing) in Blood Meridian:
It seems that to build a website fulfilling your idea, we would need a blanket license not only to store 100% of the text digitally on the servers' hard drives but also to display any part of the complete text to all members so they can annotate it. A giant like Google Inc. was not able to get such terms from publishers.
I don't think you need copyright-free access to the whole book. Interesting quotes/passages are few and should be OK to provide under fair use (e.g. see http://onlinebooks.library.upenn.edu/fairuse-explain.html).
Well, I wasn't limiting it to meme-friendly fragments such as "To be or not to be" from Hamlet. To go back to the premise mentioned by theswan, it was a "good college-level literature course".
That means most of a text like Hamlet is discussed and annotated front to back. A literary guide such as the Norton Critical Edition of Hamlet has annotations for every single line of the play. Hamlet is easy to digitize into RapGenius because it's 400 years old and public domain.[1],[2]
For a recent book still under copyright such as Twilight or Harry Potter, the rabid fans could conceivably want to discuss every page of the book. Therefore, a thousand fans "sharing annotations" leads to reconstructing the entire book. If the entire book isn't presented by the website to annotate, what exactly would they be annotating?
For literary and difficult books such as Ulysses by James Joyce, the entire book begs to be annotated. If a permissive license doesn't exist to present 100% of the book for thousands of professors and students to share annotations on, I'm not sure what the value is.
What do those stored annotations point to if members are providing the books?
This isn't RapGenius where all of the lyrics and annotations are shown on the same screen.
If the data structure of the stored annotations includes "pointers" to a specific book, what do they "point" to? A page number? Many epub/mobi books don't have absolute page numbers, and for dead-tree books even the page numbers can change between the first hardback printing and the second paperback edition.
When I think of "shared annotations", I'm thinking of virtual comments written in the margins of a book that anyone else can see. How would members "provide" books for that scenario? Upload epub & mobi files?
I guess it would be easier to quote (copy & paste) a particular passage of a book and then follow up with some commentary, but that type of thing can already be done today in any book discussion forum. To me, that's just quoting/citing, not annotating.
For epubs from the same source, epubcfi[1] can be used to point to a passage. It is somewhat resilient to editing, as long as the gross structure of the document remains the same. (Parent tags, file name, ids.) iBooks uses this standard for annotations, but doesn't expose it anywhere.[2] I don't know if any other book readers use epubcfi for annotations.
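For reference, the canonical example pointer from the EPUB CFI spec looks like this; the bracketed names are element ids in the package and content documents, and the trailing `:10` is a character offset within a text node:

```
epubcfi(/6/4[chap01ref]!/4[body01]/10[para05]/3:10)
```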
I think mobi files lose enough structure in translation that it wouldn't be possible to locate an epubcfi pointer within them. However, it might work with the newer "format 8" files, if they are converted from an epub. (I haven't investigated how much structure is lost.)
[2]: some records in my database don't have an epubcfi, but I haven't investigated whether they're user-created annotations or internal ones. (e.g. current location)
This could presumably be handled similarly to how patch does it. When you make an annotation, you store an approximate location and say a hundred words of context. When someone loads your annotation, even if they have a slightly different edition, you can look for almost-the-same text, searching outwards from an appropriate point.
The problem there probably wouldn't be the node.js code; rather, the library has a dependency on LAME for MP3 encoding (used on the Sonos end for streaming), which I've found hardware like the Raspberry Pi sometimes has trouble with.
Apply/more info at https://www.samsara.com/jobs