Persona of a person wanting to study AWS for commercial projects at one of AWS's biggest clients:
- Create account. Enter credit card details, but verification SMS never shows up. Ask for help.
- I get called at night (I'm abroad) by an American service employee, we do verification over the phone.
- Try to get the hang of things myself. Lost in a swamp of different UIs. Product names don't clarify what the products do, so you first need to learn to speak AWS, which is akin to using a chain of 5 dictionaries to learn a single language.
- Do the tutorials. Tutorials are poorly written, in that they take you by the hand and have you do things without any idea of what you are actually doing (Oh, I just spun up a load balancer? What is that and how does it work?).
- Do more tutorials. Tutorials are badly outdated. Now you have a hand-holding tutorial leading you through the swamp, but at every simple step you bump your knee against a UI element or layout that does not exist in the tutorial. It makes you feel like you wasted your time, and that no one at AWS is even aware that tutorials may need updating whenever a design department gets the urge to justify its spending with a redesign.
- Give up and search for recent books or video courses. Anything older than 3-4 years is outdated (the UIs have changed, products have been deprecated, or new ones have been added).
- Receive an email in the middle of the night: You've hit 80% of your free usage plan. Log in. Click around for 20 minutes, until I find the load balancer is still up (weird, could have sworn I spun that entire tutorial down). Kill it, go back to sleep.
- Next night, new email: You've gone $3.24 over your free budget. Please pay. 30 minutes later: We've detected unusual activity on your account. 1 hour later: Your account has been deactivated. AWS takes fraud and non-payment very seriously.
Now I need a new phone number/name/address to create a new account. I am always anxious that AWS will charge for something that I don't want, and I can't find the UI that shows all the running tutorial stuff that I really don't want to pay for. I know the UI is unintuitive, inconsistent, and out of sync with the technical writers and tutorial writers. And I know that learning AWS consists of learning where tutorials and books are outdated, or stumbling around until you find the correct sequence of steps in a "3 minutes max." tutorial step.
AWS has grown fat and lazy. The lack of design and onboarding consistency is typical for a company of that size. Outdated tutorials show a lack of inter-team communication, and seem to indicate that no one at AWS reruns the onboarding tutorials every month so they can know what their customers are complaining about (or why customers, like me, try to shun their mega-presence).
(EDIT: The order of my experiences may be a bit jumbled. Sorry. More constructive feedback: 1) I'd want a safe tutorial environment, with no (perceived) risk of having to pay for dummy services. 2) I want the tutorial writer to have the customer's best interest in mind: "For a smaller site, load balancing may be overkill, and can double your hosting costs for no tangible gains." beats "Hey Mark, we need more awareness and usage on the new load balancer. I need you to write a stand-alone tutorial, and add the load balancer to the sample web page tutorial." 3) Someone responsible for updating the tutorials (even if: "This step is deprecated. Please hold on for a correction") 4) A unified and consistent UI and UX. Scanning, searching, sorting, etc. should work without making me think, I don't want a different UI model for every service. Someone or some team to create the same recipes and boundaries for the different 2-pizza teams, so I don't get a pizza UI with all possible ingredients.)
It seems like the real issue is that you wanted to create an entire business critical infrastructure on top of a technology that you didn’t know.
How was this a good idea? I’m horribly inexperienced with modern web development but I know the rest of the stack pretty well - backend, databases, AWS networking and most of their standard technologies, CI/CD etc. When I was responsible for setting up everything for a green field project, I pulled in someone who was much better than I was for the front end even though I could have muddled my way through. Why would I take the risk?
In less than 2 hours I had auth'd https rest endpoints up and running with logging.
Deploying new endpoints is as easy as exporting a function in my code and typing deploy on the command line. This isn't after some sort of complex configuration, it is after creating a new project via 1 cli command that asks for the project name and not much else!
Google's cloud stuff, especially everything under the Firebase branding, is incredibly easy to use. Getting my serverless functions talking to my DB is almost automatic (couple lines of code).
Everything just works. The docs are wonky in places, but everything just works. The other day I threw in cloud storage, never done cloud storage before, had photo hosting working in about an hour, most of that being front end UI dev time. Everything fully end to end authenticated for editing and non-auth for reads, super easy to set that all up. No confusing service names, no need to glue stuff together, just call the API and tell it to start uploading. (Still need to add a progress indicator and a retry button...)
Everything about Google's cloud services has been like that so far. While I regret going no-sql, I can't fault the services for usability.
And you could do the same thing with lambda/DynamoDB/API Gateway just as easily by using one of the wizards.
What you can do as a hobby project is much different from what the parent poster was doing: deploying an enterprise-grade setup with existing legacy infrastructure. How would you know whether GCP is easy based on your limited experience? Not trying to sound harsh; as well as I know AWS, I would be completely lost trying to manage any non-AWS infrastructure. Just like I said about the front end in my original response, if I were responsible for setting up a complicated on-prem or colo infrastructure from scratch, I would hire someone.
“It’s a poor craftsman who blames his tools.”
A guy that works with us was also an inexperienced back end developer except with PHP. He was able to easily figure out how to host his front end code with S3 and create lambdas in Node after I sent him a link to a $12 Udemy course. I only had to explain to him how to configure the security groups to connect to our Aurora/MySQL instance.
I can't really explain how easy it is. There are no hidden charges, monthly usage is easy and clear to understand. For small to medium sized apps there isn't even any configuration. I'll be throwing tens of thousands of users, tiny I know, on a service that had 0 configuration done beyond typing its name. In fact I'm 100% sure my VMs on DO are going to give under load first.
To put it another way, there is a healthy industry of people whose sole job is to come in and figure out why AWS is billing too much.
FWIW I showed one of my friends at Amazon how easily I can create and deploy serverless code on Firebase, he admitted it is far easier than what AWS offers.
The downside of this is that options are fewer. If I want a beefier VM my choices are limited, and the way pooling and VM reuse is done is well documented and not at all under my control. It is like cloud on training wheels (TBF to gcp it is possible to opt-in to more complexity for many services, but the serverless function stuff is pretty bare bones on options, arguably as it should be)
But take auth for example. Firebase auth is amazing. Using it is beyond simple, and within the Google ecosystem everything just works so well.
Guess what? Do you really think that there aren’t GCP consultants for any serious development?
Lambda, Cognito, API Gateway, and DynamoDB are dead simple.
You’re not doing anything complicated. Just because you can set up a little hobby project doesn’t mean it would be any simpler for a real enterprise app.
As long as the serverless offerings from cloud providers have everything you need, the number of users doesn’t make things complicated. All serverless offerings are optimized for scale.
There are also WordPress consultants; does that mean that WordPress is complicated, or just that there are people without the capacity (time, not intelligence) to learn it?
You don’t have to “explain” how easy it is. The Node tutorial I used to learn it was built on Firebase.
“Millions of people use AWS” you say in a thread where people are complaining about AWS’s poor usability linked to a comment thread on another site where even more people are complaining about AWS’s usability.
The biggest and best rebuttal against your comment is the mere existence of every other comment in both of these threads.
Yes because an HN thread with 236 comments including people who know what they are doing is representative of anything.
Would it also be proof that React is an unusable framework just because I haven’t taken time to learn it even though millions of people use it everyday?
You can find “rebuttals” about the safety of vaccines on the Internet. Does that mean anything?
Give me physical opt-out. A "robots.txt"/do-not-track for computer surveillance spiders. Let me wear a necklace or a QR code on my shoulder, and any commercial face-tracking software is required to create a big black blind spot where my facial micro-expressions used to be.
The implication that everyone is forced to do something, forever, in order to not have their privacy violated, is a little silly. Explicit opt-ins should be the only legal method.
I think this tech is too complex to run on a mobile device: producing a face-swapped video from a single selfie in 8 seconds.
What I think Zao does is preprocess the videos (manually, or with highly accurate facepoint detection). They pre-calculate the transforms (standard face-morphing algorithms with opacity/alpha tweaks) and the shading, depending on the scene lighting. Then they just need a good frontal selfie (or do some frontalization) and keypoint detection, and the rest can be rendered/computed without many resources, following a pre-defined script.
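As a toy illustration of the keypoint-alignment step such a pipeline might use, here is how a 2D affine transform can be solved from three matched facial keypoints. All coordinates below are invented for illustration, not real detector output, and a real face-morphing pipeline would use many more points and a warping mesh:

```python
# Sketch of one building block such a pipeline might use: solving for the 2D affine
# transform that maps three detected facial keypoints (e.g. both eyes and nose tip)
# from the selfie onto the corresponding keypoints in the target video frame.

def solve_affine(src, dst):
    """Return (a, b, tx, c, d, ty) such that each (x, y) in src maps to
    (a*x + b*y + tx, c*x + d*y + ty), the matching point in dst."""
    def solve3(rows, rhs):
        # Gauss-Jordan elimination on a 3x3 system; no numpy needed for 3 points.
        m = [row[:] + [r] for row, r in zip(rows, rhs)]
        for i in range(3):
            p = max(range(i, 3), key=lambda r: abs(m[r][i]))  # partial pivot
            m[i], m[p] = m[p], m[i]
            for r in range(3):
                if r != i:
                    f = m[r][i] / m[i][i]
                    m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
        return [m[i][3] / m[i][i] for i in range(3)]

    rows = [[x, y, 1.0] for x, y in src]
    a, b, tx = solve3(rows, [x for x, _ in dst])
    c, d, ty = solve3(rows, [y for _, y in dst])
    return a, b, tx, c, d, ty

def apply_affine(t, pt):
    a, b, tx, c, d, ty = t
    x, y = pt
    return (a * x + b * y + tx, c * x + d * y + ty)

# Selfie keypoints -> frame keypoints (illustrative values).
src = [(30.0, 40.0), (70.0, 40.0), (50.0, 70.0)]    # left eye, right eye, nose
dst = [(132.0, 85.0), (168.0, 88.0), (150.0, 115.0)]
transform = solve_affine(src, dst)
```

Once the transform is pre-computed per scene, applying it per frame is cheap, which is consistent with the "pre-defined script" theory above.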
Yeah, I think the big question is if Zao only allows pre-selected "scenes" that they have already done the processing on or if they allow you to upload any video.
From the results, I think you are exactly right in how they are accomplishing those videos.
It is very meta to ask this question, and to see the replies to this question, as the trope: "The US meddles in foreign revolutions" is common for (social) media propaganda (bots).
> Claim: The US is supporting and encouraging Hong Kong protests.
> Verdict: Conspiracy theory without evidence.
> For years, pro-Kremlin media has used the narrative about anti-government protests being funded by the US. Examples include colour revolutions in post-soviet states, the “Arab Spring” revolts, and Euromaidan in 2014.
> The Hong Kong protests began in June 2019 because of a controversial extradition law that would allow for the transfer of suspects to face trial on the Chinese mainland.
Even if the US is involved in the Hong Kong protests, at least it'd be making good on the typical mission of spreading freedom and democracy for reasons other than "y'all got oil and we want it" (unless Hong Kong's been sitting on a massive petroleum deposit all this time, but that'd be news to me).
It's a matter of probability. They are so often involved [1] that they get blamed even when they appear not to be involved. That's especially true when people know that they have incentives for being involved, regardless of whether or not they are actually involved.
Stating "The US is funding the Hong Kong protests" may very well be found true later on, but right now, it is a conspiracy theory without any evidence, not a matter of probability based on arbitrary priors. This conspiracy theory is actively used in online propaganda with an aim to erode trust in the US, playing on plausible blame and prejudice. It is a distraction tactic, where two wrongs somehow make a right, or make us feel better about the dangerous road taken, because we conclude that nowhere is safe.
Really no better than: "Let's discuss: Employee China stole something from the communal fridge." "Sure, but what about Employee US? I judged him stealing last year. I assume it very probable that Employee US is stealing from Employee China right now. Maybe that's why Employee China was so hungry, he was forced to steal, because Employee US started it. Maybe Employee China did not even steal anything, just took the blame for an unredeemable thief. Let's discuss and pontificate about that hypothetical instead!"
This wasn't the claim, so please don't quote it as if it were. That's not a quotation from anyone here.
> later on
That's not how induction works. What we're doing is making a prediction. When we observe that something happens with a certain frequency, we judge that the probability of it occurring in the future is proportional to that frequency. I'm not saying the US is involved in HK. I'm saying it's justified to conclude that the US probably is involved in HK (and to be explicit: this is not equivalent to saying that the US did or did not cause HK.) People do not wait for an object to fall before they make their prediction, and that's because we have sufficient historical data as well as explanatory theories to support our expectations.
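The frequency-to-prediction step being described has a simple textbook form, Laplace's rule of succession. A minimal sketch (the counts below are placeholders, not a tally of real events):

```python
# Laplace's rule of succession: given s occurrences in n past opportunities,
# estimate the probability of occurrence at the next opportunity as (s+1)/(n+2).
# The +1/+2 smoothing keeps the estimate away from 0 and 1 on limited data.

def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# e.g. something observed 8 times in 10 past opportunities:
p_next = rule_of_succession(8, 10)  # 0.75
# and with no data at all, the estimate is the uninformative 0.5:
p_no_data = rule_of_succession(0, 0)  # 0.5
```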
> without any evidence
The evidence is the history of the US's behavior and its current incentives to do so.
> arbitrary priors
The priors are not arbitrary. They are consistent and theoretically accounted for by several branches of IR theory.
> conspiracy theory
Geopolitical neorealism, for example, is not a conspiracy theory, it's one of the leading schools of thought in IR theory at the moment.
> propaganda with an aim to erode trust in the US
What "erodes trust" in the US is the US's behavior, not pointing out facts about it.
> stealing last year.
We're not talking about one incident. We're talking about an extensively documented history amounting to a consistent pattern of behavior which is trivially explainable using mainstream IR theory.
Nobody is denying what China does. To point out additional facts is not to contradict any other facts.
It is a restatement of the conspiratorial claim in my first post. One is justified in saying anything one pleases, but it could still be a distraction or pointless speculation: "News flash: Alice Zhang has long hair. Women frequently have long hair. But, Bobby Joe is a surfer dude, and surfer dudes like long hair and frequently have long hair too. I haven't seen Bobby yet, or seen a photo of him, but I relevantly pose that I am justified in saying -- using my a priori knowledge of surfer dudes -- that Bobby, a man, likely has long hair too. My evidence is that Bobby, being a surfer dude, has an incentive to like long hair. I am not contradicting that Alice has long hair, just complementing the discussion with the extensively documented history of surfer dudes and the likelihood of Bobby's hair length."
One is not justified in believing anything one pleases. There are things which are justified and things which are not, and a coherent epistemology distinguishes between the two.
Can you describe your coherent epistemology? The one that leads you to believe a thoroughly context-free, slapdash list of low-quality wiki text spanning hundreds of topics over hundreds of years counts as a citable piece of evidence for... anything really?
It doesn't. It's just a summary, akin to a comment.
> evidence for
It wasn't intended to be evidence for anything, but a reminder of the pattern of behavior that the US is thoroughly documented (elsewhere) to have engaged in over the years.
It would be a cop-out. Instead of actually tackling AI's problem of common sense, claim that maybe layers of logistic regression and matrix factorization are all there is, and that we are its equal, just a few layers up in abstraction and evolution. Does one really stem and count tokens to decide if a movie review is negative in sentiment? Or does one empathize with its writer and build a complete model inside one's head?
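The "stem and count tokens" approach being criticized can be sketched in a few lines. The tiny word lists are illustrative stand-ins for a real sentiment lexicon:

```python
# A minimal token-counting sentiment scorer of the kind the comment is skeptical of.
# It counts lexicon hits; it has no model of the writer, so sarcasm defeats it.
POSITIVE = {"great", "wonderful", "brilliant", "enjoyable", "masterpiece"}
NEGATIVE = {"boring", "awful", "terrible", "dull", "mess"}

def sentiment(review):
    tokens = review.lower().replace(".", " ").replace(",", " ").split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

plain = sentiment("A boring, dull mess.")               # "negative"
sarcastic = sentiment("Oh great, another wonderful mess.")  # "positive" -- sarcasm missed
```

The second review is scored as positive because "great" and "wonderful" outnumber "mess", which is exactly the gap between counting tokens and modeling the writer.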
The horse would be the AI researcher claiming reasoning and understanding from an activation vector trained on word co-occurrence on Wikipedia, and the farmer giving clues is the heated community and industry, mistaking impressive dataset performance for a solution to a problem they're starting to forget.
I remember using some kind of software for math problem sets in high school. Some of the kids would just look at the equation, get it wrong, see the answer, and try and figure out the answer to a new version of the problem with generated coefficients. That sounds very much like a Clever Hans solution done by a human. I think what AI is lacking is the mechanism which causes us to reject such a solution, and that's much more complex than just finding the answer and I'm not sure related to the ability to find solutions in the first place.
For example a problem might be solving for the roots of a polynomial, and on each try it would randomly generate a new polynomial with new coefficients.
Ah, so what you're saying is that they would give up on the problem once they saw the answer, and then move on to a new one, rather than work through the first one until they understood the answer, right?
Sort of. The software would give you several attempts in order to get points on the problem. So they didn't really give up so much as they never had any intent to solve the problem in the first place, so much as see whether there was some kind of obvious relationship between randomly generated coefficients and the answer in order to get points on the question.
Ah, I see, sort of gaming the test platform (or trying to) rather than actually understanding the math. So a case of "you get what you measure" and also an example of what happens when you force kids to learn something they have no interest in, perhaps?
I'm of the understanding that this was not a single image, but a composite image, taken by different satellites, planes, and/or drones.
The US could have built test facilities in their deserts, had a 3-D model available for proper reconstruction, and then learned to stitch and skew all imagery back into a single composite image. There may even be some "filling in" or "sharpening" of pixels or textures that could not be observed, but are guessed from their context.
In the framework of composite imagery, it would indeed be possible to zoom in until you get to cameras capturing road traffic (maybe the license plate was not observed at the moment the main photo was taken, but was remembered from an observation by a traffic camera 30 minutes ago and stitched back onto the object: composite imagery through time).
Finally, you could use multiple non-image sources for the composition. If three (ground) sensors capture the noise, heat, or vibrations from a train on a track, you can triangulate and draw the location of that train onto space photos at a timestamp of your choosing.
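A minimal sketch of that triangulation idea: with three sensors at known positions, each estimating its distance to the source (e.g. from timing or signal intensity), the source position follows from intersecting the three circles. The sensor positions and the hidden "train" location below are invented for illustration:

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2D trilateration: subtracting the circle equations pairwise
    cancels the quadratic terms and leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if the sensors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three ground sensors at known spots; the "train" is secretly at (40, 25).
train = (40.0, 25.0)
sensors = [(0.0, 0.0), (100.0, 0.0), (50.0, 80.0)]
dists = [math.dist(s, train) for s in sensors]
pos = trilaterate(sensors[0], dists[0], sensors[1], dists[1],
                  sensors[2], dists[2])  # recovers (40.0, 25.0)
```

Real sensors would give noisy distances, so in practice you would solve this as a least-squares fit over more than three sensors, but the geometry is the same.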
> the license plate was not observed in the moment the main photo was taken
I believe there's a limit on resolution of a space satellite. If you're suggesting the traffic cam reads the plate, how are you going to connect the coloured blob that is the car with an image taken by a traffic cam at a different time in a country that doesn't give you access to its traffic cams?
Because it is common to reconstruct a signal by taking multiple measurements, instead of a single sample. The field of compressed sensing broke ground on effective sampling. If (some of) the error is random noise, then you can remove it by majority vote. It is ineffective not to reuse that high-resolution secret drone fly-over footage when composing satellite imagery at a later date. Inpainting, upscaling, de-oldifying, automatic coloring, 3D modeling, composition (see the black hole photo process), etc. have become common usage in the ML community, and so I have reason to assume these techniques are also used to enhance and improve the resolution and unobserved guesstimates of satellite imagery.
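The simplest version of "remove random noise by combining repeated measurements" is plain averaging, which shrinks independent zero-mean noise by roughly a factor of sqrt(n). A toy simulation (all values are synthetic):

```python
import random
import statistics

random.seed(42)

# One low-quality observation: the true value plus Gaussian noise (sigma = 5).
true_value = 100.0
def noisy():
    return true_value + random.gauss(0, 5.0)

single = noisy()                                       # may be off by several units
averaged = statistics.fmean(noisy() for _ in range(400))  # ~20x less noise (sqrt(400))
```

This is far cruder than compressed sensing, which exploits signal sparsity to reconstruct from few samples, but it shows why throwing away earlier high-resolution observations is wasteful.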
> how are you going to connect the coloured blob that is the car with an image taken by a traffic cam at a different time in a country that doesn't give you access to its traffic cams?
Didn't the NSA track mobiles in foreign countries by installing similar beacons / hacking sensors in bigger cities? That would theoretically allow them to view through the car roof and "see" who is in the backseat.
> The spy agency is said to be tracking the movements of “at least hundreds of millions of devices” in what amounts to a staggeringly powerful surveillance tool. It means the NSA can, through mobile phones, track individuals anywhere they travel – including into private homes – or retrace previously traveled journeys.
> The NSA provided some input into the report, with one senior collection manager, granted permission to speak to the newspaper, admitting the agency is “getting vast volumes” of location data from around the planet by tapping into cables that connect mobile networks globally.
> According to the Post, the NSA is applying sophisticated mathematical techniques to map cell phone owners’ relationships, overlapping their patterns of movement with thousands or millions of other users who cross their paths.
Maybe mixed terminology; to me a composite pic is one where sections or entireties of several pics have been arranged to make a larger or more detailed one. What you describe is different, but ISWYM.
The author of James Bond, Ian Fleming, worked as a commander for British Naval Intelligence and an officer of the 30 Assault Unit, whose task it was to gather intelligence behind enemy lines. His brother worked with "stay-behind" freedom fighter networks. :)
Human brains are optimized for prediction of future events, because this helps with survival (eg: you can predict a winter coming up, so you stock up on food).
Randomness is by (some) definition unpredictable. But humans are so eager for pattern recognition that they will see, or expect, patterns that just are not there.
"Pareidolia is the tendency to interpret a vague stimulus as something known to the observer, such as seeing shapes in clouds, seeing faces in inanimate objects or abstract patterns, or hearing hidden messages in music." and also: https://en.wikipedia.org/wiki/Apophenia#Causes
On a similar note, humans are terrible at coming up with random/unpredictable sequences. If you ask a group of test subjects to pick a random number between 1 and 10, you get a huge edge when you guess 3 or 7.
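To make that edge concrete, here is a toy calculation. The pick distribution is an illustrative assumption about how people over-pick certain numbers, not measured survey data:

```python
# Assumed (illustrative) distribution of "pick a random number 1-10" answers,
# with 3 and 7 over-picked, as folk experiments suggest.
picks = {1: 0.05, 2: 0.06, 3: 0.18, 4: 0.07, 5: 0.08,
         6: 0.07, 7: 0.22, 8: 0.09, 9: 0.09, 10: 0.09}

uniform_hit_rate = 1 / 10                    # guessing blind: 10%
best_guess = max(picks, key=picks.get)       # 7 under these assumed weights
edge = picks[best_guess] / uniform_hit_rate  # 2.2x better than chance
```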
> Statistical Modeling: The Two Cultures (2001), Breiman
> There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools.
You gather the data required to make a good probability prediction of voter preference ((soft) labels for this are easier to find than swing-voter labels). Then, when the model is uncertain, those are your swing voters / on-the-fence voters.
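A minimal sketch of "uncertain model output = swing voter", using a toy logistic model. The feature weights, feature meanings, and voter rows are invented placeholders, not a real voter file:

```python
import math

def p_democrat(features, weights, bias):
    """Toy logistic model: P(votes Democrat) from numeric features."""
    z = bias + sum(f * w for f, w in zip(features, weights))
    return 1 / (1 + math.exp(-z))

# Invented weights for invented features: urban-ness, gun ownership, college degree.
weights, bias = [1.8, -0.9, 0.6], -0.2
voters = {
    "voter_a": [1.0, 0.0, 1.0],  # strong signals -> confident prediction
    "voter_b": [0.2, 0.3, 0.1],  # weak signals  -> prediction near 0.5
    "voter_c": [0.0, 1.0, 0.0],  # strong signals, other direction
}

# Flag anyone whose predicted preference sits near 0.5 as a swing voter.
swing = [v for v, f in voters.items()
         if abs(p_democrat(f, weights, bias) - 0.5) < 0.1]
```

Here only `voter_b` is flagged: the model is confident about the other two, so uncertainty itself becomes the swing-voter label you couldn't collect directly.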
> Postcode? Age? Race? Gender? Income?
When it is found to be cost-effective: All and everything that is allowed by law and then some. In its pitch deck, Facebook boasted about its advertisers being able to target and identify: university, degree, concentration, course history, class year, housing/dormitory, age, gender, sexual orientation, zip (home and university/work), relationship status, dating interests, personal interests, club membership, jobs, political bent, friend graph, site usage/addiction level.
Likes make this very easy (with a little luck, you can deduce all of zip, age, race, gender, income from a list of Likes).
> What is it about CA's methods that were so effective?
Hillary Clinton: “The real question is how did the Russians know how to target their messages so precisely to undecided voters in Wisconsin or Michigan or Pennsylvania – that is really the nub of the question. So if they were getting advice from say Cambridge Analytica, or someone else, about ‘OK here are the 12 voters in this town in Wisconsin – that’s whose Facebook pages you need to be on to send these messages’ that indeed would be very disturbing.”
FBI: Using those techniques in June 2016, “the GRU compromised the computer network of the Illinois State Board of Elections by exploiting a vulnerability in the SBOE's website,” the report said. “The GRU then gained access to a database containing information on millions of registered Illinois voters, and extracted data related to thousands of U.S. voters before the malicious activity was identified. Similarly, in November 2016, the GRU sent spearphishing emails to over 120 email accounts used by Florida county officials responsible for administering the 2016 U.S. election,” the report said. “The spearphishing emails contained an attached Word document coded with malicious software (commonly referred to as a Trojan) that permitted the GRU to access the infected computer.”
> After all someone had to do something similar for Obama.
Obama's digital campaign was very successful, but the above seems to indicate that Kushner's campaign was far more aggressive and less scrupulous (and may have had connections with, or help from, foreign adversaries).
It may also be that propaganda and smears work better depending on your political preference and level of education and neurosis: even if Hillary had spent the same amount of money and energy efficiently (some reports indicate that Hillary's digital campaign was a waste of money and displayed poor management), it may be easier to sway a voter to vote Republican if you can target their fears of immigrants, religious beliefs, distrust of government gun regulation, and conspiracy theories. Surely the many wolf cries about fake news, and the retweeting of conspiracy theories, have set up the Trump base for easier manipulation (you can simply create a meme to counter a story in a respected journal, or keep them guessing on the alternative truth of it).
How successful was Obama's digital campaign? From what sources are we deriving that conclusion?
Two countervailing arguments:
First, the narrative about Obama's digital success is itself extraordinarily powerful and was used throughout the marketing industry to sell marketing services and products to commercial organizations; many of the obvious Google searches about Obama's campaign effectiveness will turn up a first SERP filled mostly with appeals to social media programs.
> How successful was Obama's digital campaign? From what sources are we deriving that conclusion?
I'd agree that it may have been overblown (just like the Russian interference may have been overblown). Also, of course the marketeers ran with it and turned it into a sales pitch.
But that detracts only a little from the effectiveness of Obama's digital campaign. As it was the first of its kind, relative to other campaigns that lacked a modern digital strategy, it gave a significant edge. Your argument seems of the form: "Hercules is strong. Some say he is really, really strong. Ergo, Hercules was not strong."
2008: > The key technological innovation that brought Barack Obama to the White House wasn’t his tweets or a smartphone app. It was the Obama campaign’s novel integration of e-mail, cell phones, and websites. The young, technology-savvy staffers didn’t just use the web to convey the candidate’s message; they also enabled supporters to connect and self-organize, pioneering the ways grassroots movements would adapt and adopt platforms in the campaign cycles to come.
> but a network of supporters who used a distributed model of phone banking to organize and get out the vote, helped raise a record-breaking $600 million, and created all manner of media clips that were viewed millions of times. It was an online movement that begot offline behavior, including producing youth voter turnout that may have supplied the margin of victory.
> All of the Obama supporters who traded their personal information for a ticket to a rally or an e-mail alert about the vice presidential choice, or opted in on Facebook or MyBarackObama can now be mass e-mailed at a cost of close to zero.
2012: > Once again, the Obama campaign built a dream team of nerds to create the software that drove many aspects of the campaign. From messaging to fund-raising to canvassing to organizing to targeting resources to key districts and media buys, the reelection effort took the political application of data science to unprecedented heights. The Obama team created sophisticated analytic models that personalized social and e-mail messaging using data generated by social-media activity.
> The Republican side, too, tried to create smarter tools, but it botched them. The Romney campaign’s “Orca,” a platform for marshaling volunteers to get out the vote on election day, suffered severe technical problems, becoming a cautionary tale of how not to manage a large IT project. For the moment, the technology gap between Democrats and Republicans remained wide.
Neither of these sources cites any social science to back its conclusions. I guess I'm interested in the fact that David Carr believed Obama's digital campaign was important, because I sort of generally liked David Carr. But this is color commentary, not analysis.
It is difficult to provide a counterfactual here (would Obama have won if his campaign hadn't put any effort in digital?), so I am not sure if you are requiring that.
For factual analysis of the effects and strategies employed by Obama (on a casual glance, most of which support the statement that Obama's campaign was highly successful), do a search on Google Scholar. Here are a few highly cited political science sources I was able to pull (need to get back to work now).
> Digital media in the Obama campaigns of 2008 and 2012: Adaptation to the personalized political communication environment
> This essay provides a descriptive interpretation of the role of digital media in the campaigns of Barack Obama in 2008 and 2012 with a focus on two themes: personalized political communication and the commodification of digital media as tools. The essay covers campaign finance strategy, voter mobilization on the ground, innovation in social media, and data analytics, and why the Obama organizations were more innovative than those of his opponents. The essay provides a point of contrast for the other articles in this special issue, which describe sometimes quite different campaign practices in recent elections across Europe.
> From Networked Nominee to Networked Nation: Examining the Impact of Web 2.0 and Social Media on Political Participation and Civic Engagement in the 2008 Obama Campaign
> This article explores the uses of Web 2.0 and social media by the 2008 Obama presidential campaign and asks three primary questions: (1) What techniques allowed the Obama campaign to translate online activity to on-the-ground activism? (2) What sociotechnical factors enabled the Obama campaign to generate so many campaign contributions? (3) Did the Obama campaign facilitate the development of an ongoing social movement that will influence his administration and governance? Qualitative data were collected from social media tools used by the Obama ‘08 campaign (e.g., Obama ‘08 Web site, Twitter, Facebook, MySpace, e-mails, iPhone application, and the Change.gov site created by the Obama-Biden Transition Team) and public information. The authors find that the Obama ‘08 campaign created a nationwide virtual organization that motivated 3.1 million individual contributors and mobilized a grassroots movement of more than 5 million volunteers. Clearly, the Obama campaign utilized these tools to go beyond educating the public and raising money to mobilizing the ground game, enhancing political participation, and getting out the vote. The use of these tools also raised significant national security and privacy considerations. Finally, the Obama-Biden transition and administration utilized many of the same strategies in their attempt to transform political participation and civic engagement.
> The Internet's Role in Campaign 2008
> A majority of American adults went online in 2008 to keep informed about political developments and to get involved with the election.
Additional context for the last paragraph: Hillary's campaign actually spent far more money on analytics and advertising, and far more energy (60 in-house mathematicians and analysts).
So you predict whether a voter has a general mild preference for the Democrats, then mine whether they ever reacted strongly to certain triggers: gun control, weak leadership, immigrants, drug addiction, patriotism, racism, elitism, religion & conspiracy. Then you personalize the message for them: Hillary will take away all guns; Hillary is ill and frail; the Democrats let cop killers enter the USA; China ships fentanyl to American youth and wants to steal your steelworkers' money; look at these violent BLM protesters next to one snippet of Clinton talking about "superpredators"; there is a deep state that let Obama play ping-pong in the basement of a pizza place; we have a non-crooked Christian Vice President and Hillary smells of sulfur.
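The pipeline described above — a mild-preference model plus per-trigger reaction scores, feeding a message selector — can be sketched as follows. This is a toy illustration, not anyone's actual campaign code: the thresholds, scores, and ad templates are all invented.

```python
# Toy sketch of trigger-based message targeting as described above.
# All voter profiles, thresholds, and templates are invented.

TEMPLATES = {
    "gun control": "ad: opponent will take away your guns",
    "immigration": "ad: opponent is weak on the border",
    "elitism": "ad: opponent looks down on workers like you",
}

def pick_message(voter, trigger_threshold=0.7):
    """voter: {'dem_preference': 0..1, 'trigger_scores': {topic: 0..1}}.

    Only persuadable voters (mild preference, here 0.4-0.6) with at
    least one strong historical trigger reaction get a tailored ad."""
    if not 0.4 <= voter["dem_preference"] <= 0.6:
        return None  # firmly decided either way: not worth the ad spend
    strong = {topic: score
              for topic, score in voter["trigger_scores"].items()
              if score >= trigger_threshold and topic in TEMPLATES}
    if not strong:
        return None
    return TEMPLATES[max(strong, key=strong.get)]  # strongest trigger wins

persuadable = {"dem_preference": 0.55,
               "trigger_scores": {"gun control": 0.9, "elitism": 0.75}}
```

Here `pick_message(persuadable)` selects the gun-control template, while a voter with a firm preference gets no ad at all — the mechanics of "personalize the message for them" in a dozen lines.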
The opposite of extreme self-citing is self-plagiarism (out of ignorance, to avoid extreme self-citing of ground-breaking research, or with malicious intent: submitting the same paper to multiple journals as a new result).
> The rate of duplication in the rest of the biomedical literature has been estimated to be between 10% to 20% (Jefferson, 1998), though one review of the literature suggests the more conservative figure of approximately 10% (Steneck, 2000). https://ori.hhs.gov/plagiarism-13
If work by another author was enough to inspire you and warrant a reference, then your own previous work should certainly qualify when it inspired the current paper. Self-citing provides a "paper trail" for readers who want to investigate a claim or proof further.
(As with PageRank, it is entirely possible to weight internal references/links lower than external ones, and when you also take the authority of the citing source into account, you avoid scientists accumulating references from non-peer-reviewed arXiv publications.)
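A minimal sketch of that discounting idea, on a toy citation graph (the discount factor, damping value, and author names are all made up; real bibliometric systems are far more elaborate):

```python
# Toy PageRank where self-citation edges pass on only a fraction of
# their rank. Graph, discount, and damping are illustrative only.

def pagerank(edges, self_discount=0.25, damping=0.85, iters=50):
    """edges: list of (citing_author, cited_author) pairs."""
    nodes = {name for edge in edges for name in edge}
    n = len(nodes)
    out_deg = {m: sum(1 for src, _ in edges if src == m) for m in nodes}
    rank = {m: 1.0 / n for m in nodes}
    for _ in range(iters):
        new = {m: (1 - damping) / n for m in nodes}
        for src, dst in edges:
            share = damping * rank[src] / out_deg[src]
            # Self-citations transfer only a fraction of their weight;
            # external citations transfer in full.
            new[dst] += share * (self_discount if src == dst else 1.0)
        rank = new
    return rank

# "selfie" cites only themself twice; "ext" is cited by two peers.
edges = [("selfie", "selfie"), ("selfie", "selfie"),
         ("a", "ext"), ("b", "ext")]
ranks = pagerank(edges)
```

With the discount in place, `ext` ends up ranked above `selfie`; set `self_discount=1.0` and the self-citer wins — which is exactly the gaming the parenthetical above is about.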
- Create account. Enter credit card details, but verification SMS never shows up. Ask for help.
- I get called at night (I'm abroad) by an American service employee, and we do the verification over the phone.
- Try to get the hang of things myself. Lost in a swamp of different UIs. Product names don't clarify what the products do, so you first need to learn to speak AWS, which is akin to using a chain of 5 dictionaries to learn a single language.
- Do the tutorials. Tutorials are poorly written, in that they take you by the hand and make you do things without explaining what you are actually doing (Oh, I just spun up a load balancer? What is that and how does it work?).
- Do more tutorials. Tutorials are badly outdated. Now you have a hold-your-hand tutorial leading you through the swamp, but at every simple step you bump your knee against a UI element or layout that does not exist in the tutorial. It makes you feel like you wasted your time, and that no one at AWS is even aware that tutorials may need updating whenever a design department gets the urge to justify its spending with a redesign.
- Give up and search for recent books or video courses. Anything older than 3-4 years is outdated (the UIs have changed, products have been deprecated, or new products have been added).
- Receive an email in the middle of the night: you've hit 80% of your free usage plan. Log in. Click around for 20 minutes until I find that the load balancer is still up (weird, I could have sworn I spun that entire tutorial down). Kill it, go back to sleep.
- Next night, new email: you've gone $3.24 over your free budget. Please pay. 30 minutes later: we've detected unusual activity on your account. 1 hour later: your account has been deactivated. AWS takes fraud and non-payment very seriously.
Now I need a new phone number/name/address to create a new account. I am always anxious that AWS will charge me for something I don't want, and I can't find the UI that shows all the running tutorial leftovers that I really don't want to pay for. I know the UI is unintuitive, inconsistent, and out of sync with both the technical writers and the tutorial writers. And I know that learning AWS consists of learning where tutorials and books are outdated, or stumbling around until you find the correct sequence of steps in a "3 minutes max." tutorial step.
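The "show me everything still running from my tutorials" view I'm missing can at least be approximated by tagging every resource a tutorial creates and then sweeping by tag — AWS does expose such an inventory through the Resource Groups Tagging API (`aws resourcegroupstaggingapi get-resources --tag-filters Key=project,Values=tutorial`). Below is a stand-in sketch of the filtering logic only, with a mocked inventory instead of real API calls; the tag key/value convention is my own invention:

```python
# Sketch of a "what's still running from my tutorials?" sweep.
# In practice the inventory would come from the AWS Resource Groups
# Tagging API; here it is a mocked list so the logic stays visible.

def find_leftovers(resources, tag_key="project", tag_value="tutorial"):
    """Return the ARNs of resources carrying the given tag."""
    return [r["arn"] for r in resources
            if r.get("tags", {}).get(tag_key) == tag_value]

inventory = [  # mocked stand-in for the tagging API's response
    {"arn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
            "loadbalancer/app/tutorial-demo/abc123",
     "tags": {"project": "tutorial"}},
    {"arn": "arn:aws:s3:::my-production-bucket",
     "tags": {"project": "production"}},
]

leftovers = find_leftovers(inventory)
```

This only works, of course, if the tutorial had told you to tag things in the first place — the forgotten load balancer from the story above would surface in `leftovers` only under that discipline.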
AWS has grown fat and lazy. The lack of design and onboarding consistency is typical for a company of that size. Outdated tutorials show a lack of inter-team communication, and seem to indicate that no one at AWS reruns the onboarding tutorials every month to learn what their customers are complaining about (or why some of them, like me, try to shun AWS's mega-presence).
(EDIT: The order of my experiences may be a bit jumbled. Sorry. More constructive feedback: 1) I'd want a safe tutorial environment, with no (perceived) risk of having to pay for dummy services. 2) I want the tutorial writer to have the customer's best interest in mind: "For a smaller site, load balancing may be overkill and can double your hosting costs for no tangible gain." beats "Hey Mark, we need more awareness and usage of the new load balancer. I need you to write a stand-alone tutorial and add the load balancer to the sample web page tutorial." 3) Someone responsible for updating the tutorials (even if only: "This step is deprecated. Please hold on for a correction."). 4) A unified and consistent UI and UX. Scanning, searching, sorting, etc. should work without making me think; I don't want a different UI model for every service. Someone or some team should set common recipes and boundaries for the different 2-pizza teams, so I don't get a pizza UI with all possible ingredients.)