Hacker News | FloorEgg's comments

I generally agree with what you're saying as a good rule; I would just add one exception.

If you've seen multiple doctors, specialists, etc. over the span of years and they're all stumped or dismissive of your symptoms, then the only way to get to the bottom of it may be to take matters into your own hands. Specifically, this would look like:

- carefully experimenting with your living systems, lifestyle, habits, etc. It's best if there are at least occasional check-ins with a professional. This requires discipline and can be hard to do well, but it also sometimes uncovers the best solutions (a lifestyle change that solves the problem instead of a lifetime of suffering or dependency on speculative pharmaceuticals).

- doing thoughtful, emotionally detached research (reading published papers slowly over a long time, e.g. weeks or months). This is also very hard, but sometimes you can discover things doctors didn't consider. The key is to be patient and stay curious, to avoid an emotional rollercoaster and wasting doctors' time. Not everyone is capable of this.

- going out of your way to gather data about your health (logging what you eat, what you do, your stress levels, etc.; testing your home for mold; checking vitals, heart rate variability, and so on)

- presenting to a doctor, for interpretation, any data you gathered and research you discovered that you think may be relevant

Again, I want to emphasize that taking your health matters into your own hands like this only makes sense after multiple professionals have been unhelpful AND if you're capable of doing so responsibly.


I agree with you.

I would just add that I've noticed organizations tend to calcify as they get bigger and older. Kind of like trees, they start out as flexible saplings, and over time develop hard trunks and branches. The rigidity gives them stability.

You're right that there's no way they could have gotten to where they are if they had prioritized data integrity and formal verification in all their practices. Now that they have so much market share, they might collapse under their own weight if their trunk isn't solid. Maybe investing in data integrity and strongly typed, functional programming that's formally verifiable is what will help them keep their market share.

Cultures are hard to change, and I'm not expecting them to change beyond what is feasible or practical. I don't lead an engineering organization like theirs, so I'm definitely armchairing here. I just see the logic in the argument that adopting some of these methods would probably benefit everyone using their services.


Aren't you agreeing with his point?

The process of evolution distilled all that "humongous" amount of data down to what is most useful. He's basically saying our current ML methods to compress data into intelligence can't compare to billions of years of evolution. Nature is better at compression than ML researchers, by a long shot.


Sample efficiency isn't the ability to distill a lot of data into good insights. It's the ability to get good insights from less data. Evolution didn't do that; it had a lot of samples to get to where it did.


> Sample efficiency isn't the ability to distill a lot of data into good insights

Are you claiming that I said this? Because I didn't....

There are two things going on.

One is compressing lots of data into generalizable intelligence. The other is using generalized intelligence to learn from a small amount of data.

Billions of years and all the data that goes along with it -> compressed into efficient generalized intelligence -> able to learn quickly with little data


"Are you talking past me?"

on this site, more than likely, and with intent


>Aren't you agreeing with his point? ... Nature is better at compression than ML researchers, by a long shot.

What I mean is basically the opposite. Nature isn't better in the sense of being more efficient; it just had a lot more time and scale to do it in an inefficient way. The reason we're learning quickly is that we can leverage that accumulated knowledge, in a manner similar to in-context learning or other multi-step learning (the bulk of the training forms abstractions which are then used by the next stage). It's really unlikely we have some magical architecture that is fundamentally better than e.g. transformers or any other architecture at sample efficiency while having bad underlying data. My intuition is there might even be a hard limit to that. A multi-stage bootstrap might be the key, not the architecture.

Same for the social process of knowledge transfer/compression.


South Park isn't for everyone, but they covered this pretty well recently with Randy Marsh going on a sycophant bender.


Interesting, thanks I’ll check it out.

Yes, the biggest problem with authentic exercises is evaluating the students' actions and giving feedback. The problem is that authentic assessments didn't previously scale (e.g. what worked in 1:1 coaching or tutoring couldn't be done for a whole classroom). But AI can scale them.

It seems like AI will destroy education, but it's only breaking the old education system; it will also enable a new and much better one, one where students make more and faster progress developing more relevant and valuable skills.

The education system uses multiple-choice quizzes and tests because their grading can be automated.

But when evaluation of any exercise can be automated with AI, such that students can practice any skill with iterative feedback at the pace of their own development, so much human potential will be unlocked.


Most learning curves in the education system today are very bumpy and don't adapt well to the specific student. Students get stuck on big bumps or get bored and demotivated at plateaus.

AI has potential to smooth out all curves so that students can learn faster and maximize time in flow.

I've spent literally thousands of hours thinking about this (and working on it). The future of education will be as different from today as today is from 300 years ago.

Kids used to get smacked with a stick if they spelled a word wrong.


There is a huge opportunity here to have the stick smacking be automated and timed to perfection.


The point is that the education system has come a long way in utilizing STEM to make education more efficient (helping students advance faster and further with less resources) and it will continue to go a long way further.

People thought the threat of physical violence was a good way to teach. We have learned better. What else is there for us to learn? What have we already learned but just don't have the resources to apply?

I've met many educators who have told me stories of ambitious learning goals for students that didn't work out because there wasn't the time or the resources to facilitate them properly.

Often instructors are stuck trading off between inauthentic assessments that have scalable evaluation methods and authentic exercises that aren't feasible to evaluate at scale, so evaluation is sparse or incomplete, or students only receive credit for completion.


Write it in something like Google docs that tracks changes and then share the link with the revision history.

If this is insufficient, then there are tools specifically for education contexts that track student writing process.

Detecting the whole essay being copied and pasted from an outside source is trivial. Detecting artificial typing patterns is a little more tricky, but also feasible. These methods dramatically increase the effort required to get away with having AI do the work for you, which diminishes the benefit of the shortcut and influences more students to do the work themselves. It also protects the honest students from false positives.
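
To make that concrete, here's a rough sketch of the kind of signals such tools can look at. This is a hypothetical illustration in Python, not any specific product's method; the event-log format and thresholds are assumptions. The idea is to flag large single insertions as likely pastes, and flag inter-key timing that is too uniform to be human.

    # Hypothetical sketch: spot paste-ins and scripted "typing" from an edit
    # log of (timestamp_seconds, chars_inserted) events. The thresholds are
    # illustrative assumptions, not values from any real product.
    from statistics import mean, pstdev

    def flag_suspicious(events, paste_chars=200, min_jitter=0.03):
        flags = []
        # A large single insertion looks like a paste from an outside source.
        for ts, n in events:
            if n >= paste_chars:
                flags.append(f"{n} chars inserted at once at t={ts:.0f}s")
        # Near-constant gaps between single keystrokes suggest automation.
        gaps = [b[0] - a[0] for a, b in zip(events, events[1:]) if b[1] == 1]
        if len(gaps) > 20 and pstdev(gaps) < min_jitter * mean(gaps):
            flags.append("inter-key timing is too regular for human typing")
        return flags

    # Example: perfectly even keystrokes plus one big paste both get flagged.
    log = [(i * 0.120, 1) for i in range(200)] + [(30.0, 1500)]
    print(flag_suspicious(log))

Real tools combine many more signals than this, but even crude statistics like these raise the bar considerably.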


I thought it was a good idea at first, but it can easily be defeated by typing out the AI content. One can add pauses/deletions/edits, or true edits from joining ideas across different AI outputs.


> Detecting artificial typing patterns is a little more tricky, but also feasible.

Keystroke dynamics can detect artificial typing patterns (copying another source by typing it out manually). If a student has to go way out of their way to make their behavior appear authentic, then it decreases the advantage of cheating and fewer students will do it.

If the student is integrating answers from multiple AI responses then maybe that's a good thing for them to be learning and the assessment should allow it.


It's not just typing patterns though, it's also how much editing you do, what kinds of edits, where you pause and such.

Manually re-typing another source is something these tools were originally designed to detect. The original issue was "essay mills", not AI.


It will take 0 time for some (smarter?) student to create an AI agent that mimics keystrokes.


Not 0 time, but yes, integrity preservation is an arms race.

The best solutions are in student motivations and optimal pedagogical design. Students who want to learn, and learning systems that are optimized for rate of learning.


That's the best solution. The easiest solution is to move away from homework and into classwork.

In some narrow contexts that is easy, but in many other contexts that is not easy, or doesn't actually solve it.

Online programs, limited infrastructure, and dishonest students exploiting accessibility programs are some examples where what you're suggesting is easier said than done.

Also, AI can help students cheat in class too: smart glasses, pens with cameras and LED screens on them (yes, really), or just regular smartphones. Even switching to pen and paper won't reduce the ease of access.

Instructors don't want to police cheating; they want to teach (or do research). Either way, they don't want to police.

Students cheat when they think what they're learning is low value, the learning process is too clunky, or they place too high a value on the grade. All these imbalances can be improved with better pedagogy.

The only enduring way to actually solve the cheating crisis isn't to make it harder, it's to reduce the value of cheating. Everything else is either temporary or performative.


No, a genuine doc will have a drafting process. You'll edit and change weak parts, etc.

I guess you could use AI to guide this, at which point it's basically a research tool and grammar checker.


Depends how you work. I've rarely (never?) drafted anything, and almost all of my first pass ended up in the final result. It would look pretty close to "typed in the AI answer with very minor modifications after". I'm not saying that was a great way to do it, but I definitely wouldn't want to be failed for that.


There is a fractal pattern between authentic and inauthentic writing.

Crude tools (like Google Docs revision history) can protect an honest student who engages in a typical editing process from false allegations, but they can also protect a dishonest student who fabricated the evidence, and fail to protect an honest student who didn't do any substantial editing.

More sophisticated tools can do a better job of untangling the fractal, but as with fractal-shaped problems, the layers of complexity keep going and there are no perfect solutions, just tools that help in some situations when used by competent users.

The higher-ed professors who really care about academic integrity are rare, but they are layering many technical and logistical solutions to fight back against dishonest students.


I don't mean formal multiple drafts. Even just editing bits, moving stuff around.

I guess some people can type out a 5,000 word assignment linearly from start to finish in 2 hours at 40wpm but that's both incredibly rare and easy to verify upon further investigation.


You got it.

Not really; the timing of the saves also won't reflect the expected work that needs to be put in, unless you take the same amount of time to feed in the AI output as a normal student would to actually write and edit the paper, at which point cheating is meaningless.


The way I've always thought of this is that there are potentials for interactions, and interactions.

Interactions act like point particles and potentials for interactions act like waves.

Arguing over the distinction is a bit like debating whether people are the things they do, or the thing that does things. There is some philosophical discussion to be had, but for the most part it doesn't really matter.


I'm not the author of the parent.

My impression of the joke is that intelligent and knowledgeable people willingly engage with social media and fall into treating what they see as truth, and then are shocked when they learn it's not truth.

If the allegory of the cave is describing a journey from ignorant and incorrect beliefs to enlightened realizations, the parent is making a joke about people going in reverse. Perhaps they have seen first hand someone who is educated, knowledgeable and reasonable become deceived by social media, casting away their own values and knowledge for misconceptions incepted into them by persistent deception.

I'm not saying I agree entirely with the point the joke is making but it does sort of make sense to me (assuming I even understand it correctly).


> intelligent and knowledgeable people willingly engage with social media and fall into treating what they see as truth, and then are shocked when they learn it's not truth.

I also see this with AI answers relying on crap internet content.


Most content on the internet has been optimized to get attention, not to represent truth.

AI trained on most content will be filled with misconceptions and contradictions.

Recent research has been showing that culling bad training data has a huge positive impact on model outputs. Something like 90% of desirable outputs comes from 10% of the training data (I forget the specifics and don't have time to track down the paper right now).
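
As a toy illustration of what "culling" can mean in practice (a hypothetical sketch; the quality heuristic and keep fraction are made-up stand-ins, not the method from that paper), the idea is just to score documents and train only on the top slice:

    # Toy sketch of training-data culling: score each document with some
    # quality heuristic (real pipelines use learned filters) and keep only
    # the top fraction. The scorer and threshold are made-up stand-ins.
    def quality_score(doc: str) -> float:
        words = doc.split()
        if not words:
            return 0.0
        uniqueness = len(set(words)) / len(words)   # penalize repetitive spam
        shouting = sum(w.isupper() for w in words) / len(words)
        return uniqueness - shouting

    def cull(corpus: list[str], keep_fraction: float = 0.1) -> list[str]:
        ranked = sorted(corpus, key=quality_score, reverse=True)
        return ranked[:max(1, int(len(ranked) * keep_fraction))]

    corpus = [
        "BUY NOW BUY NOW BUY NOW CLICK HERE",
        "a careful explanation of how the method actually works",
        "lol lol lol lol lol",
    ]
    print(cull(corpus, keep_fraction=0.4))  # keeps only the substantive text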

I really hope that AI business models don't fall into relying on getting and keeping attention. I also hope the creators of them exist in a win-win relationship with society as a whole. If AIs compete with each other based on which best represent truth, then overall things could get a lot better.

The alternative seems dreadful.

Edit: I am curious why this is getting downvoted.


A small number of samples can poison LLMs of any size

https://www.anthropic.com/research/small-samples-poison

It was discussed a month or so back.

https://news.ycombinator.com/item?id=45529587


Yeah I saw that one too, which I would think supports my point that distilling down training data would lead to more truth aligned AI.

I mean it's also just the classic garbage in garbage out heuristic, right?

The more training data is filtered and refined, the closer the model will get to approximating truth (at least functional truths).

It seems we are agreeing and adding to each other's points... Were you one of the people who downvoted my comment?

I'm just curious what I'm missing.


Good take. I think someone said (might've been Elon) that building an AI but limiting its training material to material from 1870-1970 would avoid a lot of this, as arguably that was the period of greatest advancement of humankind that is not spoiled by bad data: there were no social networks, and everything that got printed needed to have more meaning behind it because printing took more effort.

It would be VERY refreshing to see more than one company try to build an LLM that is primarily truth-seeking, avoiding the "waluigi problem". Benevolent or not, progress here should not be led just by one man ...


To me it looks like there are many people working to make AI truth seeking, and they are taking a variety of approaches. It seems like as time goes on opportunities to build truth seeking AI will only increase as the technology becomes more ubiquitous and accessible. Like if the costs of training a GPT-5 level LLM drop 10,000x.


I didn't downvote, but it's naive to the point of irresponsibility not to assume and prepare for LLMs being weaponized in exactly the same way as social media, as you alluded to. It's not like human nature or the nature of capitalism has changed recently.


Are you saying that hope is naive and irresponsible?


What you are hoping for will not occur.

Do hope. But hoping for a unicorn is magical thinking.

Other people can either count this as a reason to despair, or figure out a way to get to the next best option.

The world sucks, so what? In the end, all problems get solved if you can figure them out.


For decades I have continuously studied physics, chemistry, biology, psychology, history, management science, market research, economics, religion, finance, and computer science among many other things. I study for 4-5 hours on average every day, and the rest of my working hours are spent practicing my craft.

The reason I say this is that blind hope and informed hope are two different things.

Media has always relied on novel fear to attract attention. It's always "dramatized," sacrificing truth for what sells. However, AI is like electricity or computation: people make it to get things done. Some of those things may be media, but it will also be applied to everything else people want to get done. The thing about tools is that if they don't work, people won't keep using them. And the thing about lies is that they don't work.

For all of human history people have become more informed and capable. More conveniences, more capabilities, more ideas, more access to knowledge, tools, etc.

What makes you think that AI is somehow different than all other human invention that came before it?

It's just more automation. Bad people will automate bad things, good people will automate good things.

I don't have a problem with people pointing out risks and wanting to mitigate them, but I do have a problem with invalid presuppositions that the future will be worse than the past.

So no, I don't think I'm hoping for a unicorn. I think I'm hoping that my intuition for how the universe works is close enough, and the persistent pessimism that seems to permeate from social media is wrong.


Speaking as someone who has also spent decades both studying and applying STEM and social sciences my commentary is this:

> The thing about tools is that if they don't work people won't use them.

People will and do use tools that don't work. Over time fewer people use bad tools as word spreads. Often "new" bad tools have a halo uptake of popularity.

> And the thing about lies is that they don't work.

History tells us that lies work in the short term, and that is sufficient to force bad decisions that have long shadows.


> The thing about tools is that if they don't work people won't use them.

My bad. I meant won't keep using them.

> History tells us that lies work in the short term, and that is sufficient to force bad decisions that have long shadows.

What do you mean by "work"?

It sounds like you are implying that a lie "works" by convincing people to believe it?

I meant a lie doesn't work in that if you follow the lie you will make incorrect predictions about the future.

If someone acts on a lie which results in a bad decision with a "long shadow" then wouldn't that mean acting out the lie didn't work?


Lies work in the sense that they can persuade large groups of people to take courses of action based on their belief in those lies.

They are used by bad actors to, say, win elections and then destroy systemic safeguards and monitoring mechanisms that work to spotlight bad actions and limit damage.

There are also lies, such as a common belief in Wagyl, that draw people together to act in unison as a community to help the less fortunate, preserve the environment and common resources, and do other things not generally perceived as destructive.


> Lies work in the sense that they can persuade large groups of people to take courses of action based on their belief in those lies.

I don't disagree with this. It's reasonable to assume I was talking about that type of "work", but I wasn't.

> There are also lies, such as a common belief in Wagyl, that draw people together to act in unison as a community to help the less fortunate, preserve the environment and common resources, and do other things not generally perceived as destructive.

I am not familiar with this specific culture but I totally get your point. Most religion works like this. I would just consider that the virtues and principles embedded within the stories and traditions are the actual truths that work, and that Wagyl and the specifics of the stories are just along for the ride. The reason I believe this is because other religions with similar virtues and values will have similar outcomes even though the lie they believe in is completely different.

I said that lies destroy, and that wasn't right. Sometimes they do, but as you have pointed out, often they don't.


I applaud your efforts! You stated:

> I really hope that AI business models don't fall into relying on getting and keeping attention. I also hope the creators of them exist in a win-win relationship with society as a whole.

The ratio of total hours of human attention available to total hours of content is essentially 0. We have infinite content, which creates unique pressures on our information gathering and consumption ability.

Information markets tend to consolidate, regulating speech is beyond fraught, and competition is on engagement, not factuality.

Competing on accuracy requires either Bloomberg Terminal levels of payment, or you being subsidized by a billionaire. Content competes with content, factual or otherwise.

My neck of the woods is content moderation, misinformation, and related sundry horrors against thought, speech and human minds.

Based on my experience, I find this hope naive.

I do think it is in the right direction, and agree that measured interventions for the problems we face are the correct solution.

The answer to that, for me, is simply data and research on what actually works for online speech and information health.


It feels to me like everyone responding to that comment is irrationally pessimistic. However I keep noticing little mistakes in my own wording that alter the meaning away from my intention, and I can't help but think it's my own fault for not making my point more clear.

> I really hope that AI business models don't fall into relying on getting and keeping attention.

What I really meant is that I hope that the economic pressures on media don't naturally also apply to AI. I do think it's naive to hope that AI won't be used in media to compete for attention, I just don't think it's naive to hope that's not the only economic incentive for its development.

I also hope that it becomes a commodity, like electricity, and spills far and wide outside of the control of any monopoly or oligopoly (beyond the "tech giants"), so that hoping tech giants do anything against their incentive structures is moot. I hope that the pressures that motivate AIs development are overwhelmingly demand for truth, so that it evolves overwhelmingly towards providing it.

If this hope is naive, that would imply the universe favors deception over truth, death over life, and ultimately doesn't want us to understand it. To me, that implication seems naive.

The Bloomberg Terminal is an interesting example and I see your point. I guess the question is what information there is a stronger incentive to keep scarce. The thing about Bloomberg Terminals is that people are paying for immediate access to brand-new information to compete in a near-zero-sum game. Most truth is everlasting insight into how to get work done. A counterexample is textbooks.


Well, here’s an example of the blind spots we possess. You, and most people, by default privilege “information”. However, in our current reality, everything is “content”. Information is simply content with a flag.

The commodification is towards the production of content, not information.

Mostly, producers of Information are producing expensive “luxury goods”, but selling them in a market for disposable, commodified goods. This is why you need to subsidize fact checkers and newspapers.

I believe this is a legacy of our history, where content production was hard and the ratio of information to content was higher.

Consumers of content are solving for not just informational and cognitive needs, they are also solving for emotional needs, with emotional needs being the more fundamental.

Consumers will struggle with so many sources of content, and will eventually look towards bundling or focus only on certain nodes.

Do note - the universe does not need to favor anything for this situation to occur. Deception is a fundamental part of our universe, because it’s part of the predator prey dynamic. This in turn arises out of the inability of any system to perfectly process all signals available to them.

There is always a place for predators or prey to hide.


I agree.

I thought of the predator prey frame shortly after posting my last comment.

Maybe it boils down to game theory and cooperation vs competition, and the free energy principle. Competition (favoring deception) puts pressure on cooperation (favoring truth). Simultaneously life gets better at deceiving and at communicating the truth. They are not mutually exclusive.

When entities are locked into long term cooperation, they have a strong bias to communicate truth with each other. When entities are locked into long term competition, they have a strong bias to deceive each other.

Evolution seems to be this dance of cooperation and competition.

When a person is born, overwhelmingly what's going on between cells inside their body is cooperation. When they die, overwhelmingly what happens between cells is competition.

So one way that AI could increase access to truth, is if most relationships between people and AI are locked into long term cooperation. Not like today where it's lots of people using one model from a tech co, but something more like most people running their own on their own hardware.

I've heard people say we are in the "post truth era" and something in my gut just won't accept that. I think what's going on is the power structures we exist in are dying, which is biasing people and institutions to compete more than cooperate, and therefore deceive more than tell the truth. This is temporary, and eventually the system (and power structures) will reconfigure and bias back to cooperation, because this oscillation back and forth is just what happens over history, with a long term trend of favoring cooperation.

So to summarize... Complexity arises from oscillations between competition and cooperation, competition favors deception and cooperation favors telling the truth. Over the long-term cooperation increases. Therefore, over the long-term truth communication increases more than deception.


We are in a post truth era, and discomfort is a side effect of ideology and lack of information.

I’ve been there too, is what I am saying. But, reality is reality, and feeling bad or good about it is pointless beyond a point.

AI cannot increase access to truth. This is also part of the hangover of our older views on content, truth and information.

In your mental model, I think you should recognize that we had an “information commons” previously, even to an extent during the cable news era.

Now we have a content commons.

The production of Information is expensive. People are used to getting it for free.

People are also now offered choices of more emotionally salient content than boring information.

People will choose the more emotionally salient content.

People producing information, will always incur higher costs of production than people producing content. Content producers do not have to take the step of verifying their product.

So content producers will enjoy better margins, and eventually either crowd out information producers, or buy out information producers.

Information producers must raise prices, which will reduce the market available for them. Further - once information is made, it can always just be copied and shared, so their product does not come with some inherent moat. Not to mention that raising prices results in fewer customers, and goes against the now anachronous techie ethos of “Information should be free”.

I am sure someone will find some way to build a more durable firm in this environment, but it’s not going to work in the way you hoped initially. It will either need to be subsidized, or perhaps via reputation effects, or some other form of protection.

Cooperation is favored if cooperation can be achieved. People will find ways to work together, however the equilibrium point may well be less efficient than alternatives we have seen, imagined or hoped for.

More dark forest, cyberpunk dystopia, than Star Trek utopia.

There’s an assumption of positive drift in your thinking. As I said, this is my neck of the woods, and things are grim.

But - so what? If things are grim, only through figuring it out can it actually be made better.

This is the way the pieces on the board are set up as I see it. If you wish to have agency in shaping the future, and not be a piece that is moved, then hopefully this explanation will help build new insights and potential moves.


My move is to focus on making it easier for college students to develop critical thinking and communication skills. Smoothing out the learning curves and making education more personalized, accessible, and interactive. I'm just getting started, but so far already helping thousands of students at multiple universities.

There's one thing that I just realized hasn't come up in our discussion yet which has a big impact on my perspective.

Everything in the universe seems built on increasing entropy. Life net decreases entropy locally so that it can net increase it globally. There also seems to be this pattern of increasing complexity (particles, atoms, molecules, cells, multicellular organisms, collectives) that unlocks more and more entropy. One extremely important mechanism driving this seems to be the Free Energy Principle, and the emergent ability to predict consequences of actions. Something about it enables evolution, and evolution enables it.

This perspective is what gives me more confidence that within collectives the future will include more shared truth than the past, because at every level of abstraction and for all known history that has been the long-term trend.

Cells get better at modelling their external environment, and better at communicating internally.

The reason why I am so confident we are not "post truth" is because lies don't work, not in the sense that people can't be deceived by lies (obviously they can), but dysfunctional lies won't produce accurate predictions. Dysfunctional lies don't help work get done, and the universe seems to be designed for work to get done. There is some force of nature that seems to favor increasingly accurate predictive ability.

Your perspective seems to be very well informed from what feels like the root of the issue, but I think you're missing the big picture. You aren't seeing the forest, just the trees around you. I know you assume the same of me, that I don't see these trees that you see. I believe you, that what you see looks grim. I also agree we need to understand the problems to solve them. I'm not advocating for any lack of action.

Just suggesting that you consider:

- for all history life has gotten better at prediction

- truth makes better predictions than lies

What's more likely: that we are hitting a bump in the road that is an echo of many that came before it, or that something fundamental has materially changed the trajectory of all scientific history up to this point?

Your points about the cost of information and the cost of content are valid. In a sense, content is pollution. It's a byproduct of competition for attention.

I can think of a few ways that the costs and addictive nature of content could become moot.

- AI lowers the cost of truth

- Human psychology evolves to devalue content

- economic systems evolve to rebalance the cost/value of each

- legal systems evolve to better protect people from deception

These are just what come to mind quickly. The main point is that these quirks of our current culture, psychology, economic system, technological stage and value system are temporary, not fundamental, and not permanent. Life has a remarkable ability to adapt, and I think it will adapt to this too.

I really appreciate you engaging with me on this so I could spend time reflecting on your perspective. If I ever came across as dismissive I apologize. You've helped me empathize with you and others with the same concerns and I value that. You haven't fundamentally changed my mind, but you gave me a chance to hone my thinking and more deeply reflect on your main points.

It feels like we agree on a lot, we are just incorporating different contexts into our perspectives.


> I know you assume the same of me, that

Nah. I see it more as there was an information asymmetry, on this specific topic, due to our different lived experiences.

I can feasibly provide more nuanced examples of the mechanics at play as I see them. Their distribution results in a specific map / current state of play.

> - Economic systems evolve
> - legal systems evolve

These types of evolutions take time, and we are far from even articulating a societal position on the need to evolve.

Sometimes that evolution is only after events of immense suffering. A brand seared on humanity’s collective memory.

We are not promised a happy ending. We can easily reach equilibrium points that are less than humanly optimal.

For example - if our technology reaches a point where we can efficiently distract the voting population, and a smaller coterie of experts can steer the economy, we can reach 1984 levels of societal ordering.

This can last a very long time, before the system collapses or has to self correct.

Something fundamental has changed and humanity will adapt. However, that adaptation will need someone to actually look at the problem and treat it on its merits.

One way to think of this is cigarettes, junk food, and salads. People shifted their diets when the cost of harm was made clear, AND the benefits of a healthy diet were made clear, AND we had things like the FDA, AND scientists doing sampling to identify the degree of adulteration in food.

——

> My move is to focus on making it easier for college students to develop critical thinking and communication skills. Smoothing out the learning curves and making education more personalized, accessible, and interactive. I'm just getting started, but so far already helping thousands of students at multiple universities.

How are you doing this?


Hoping that the tech giants will put truth over profit is folly. Hoping that audiences will reject this is viable.


> Hoping that the tech giants will put truth over profit is folly.

I never said that though?

> Hoping that audiences will reject this is viable.

I have no clue what you mean. What is "this" referring to?


The number of otherwise intelligent folks I follow on twitter who occasionally brag or make note of their follower count without realizing 80%+ are bots is way too high.

I think that's by design though. Tolerate bots to get high-value users to participate more after they think real people are actually listening to them.


Iowa City is a gem of a college town. Beautiful, vibrant, and really nice people.

Maybe this program wouldn't work everywhere. Makes sense it would work there.


Corvallis, Oregon is another college town where buses are free within the city and also to some other cities including Eugene and McMinnville.

