Skill at “steering” coding assistants may soon be the quality most sought after in software engineers, systems administrators and the like. It's a balance between a light touch and a firm focus.
Our entire system for measuring the value of cognitive work is denominated in human time. But machines don't run on human time. That's a problem.
We regularly have 92%–93% participation in federal elections here in Australia. We're having one next weekend, and there are already record numbers of pre-poll votes.
Correction: that figure counts those who enter a polling station. What you do in there is up to you. You can cast a vote, spoil the ballot, cast a "donkey vote" (numbering the options in the order printed), or leave the ballot empty, as long as it goes in the box.
"De-Risking AI", Wisely AI's latest white paper, highlights and explains five new risks of Generative AI tools: anthropomorphising; malicious and commercially protected training data; hallucinations; privacy, data security and data sovereignty; and prompt attacks. The white paper addresses each in detail, and suggests strategies to mitigate these risks. It's part of our core mission to "help organisations use AI safely and wisely."
Ask yourself if this is work you _want_ to do. What purpose does it serve - not just those you're teaching, but yourself? Can this be something you learn from too? Can it help open doors?
One thing to remember: people value work that they pay for. Something given freely is treated as having no value. And that's not something that can be changed.
If you do say no, say no as politely as possible. Make the no an opportunity, rather than a closing of a door.