"Apps like Instagram have a standard carousel for photo albums"
Yeah, but that only applies to users who are used to Instagram. It's definitely not an INTUITIVE feature.
I've had friends and family link me to Instagram posts, and though I've learned now, for a long time they'd have to explicitly mention there were additional pictures in the post, or I'd miss them.
(I don't know if it's different in the app, but certainly in the webpage those little dots at the bottom are non-obvious to a first-time user.)
I think it's more of a mobile-first design. It would be super weird to see it and do it with a mouse, but it's a common enough UI pattern on mobile apps that I think a lot of people will be accustomed to it. But Instagram also has a feature where the second time you see a post that you haven't scrolled through, you see the second photo. This might serve as a discoverability aid.
It is 153% the case that mobile user experience design happened almost entirely during the epoch when helping your user understand how to use the app comes second, and keeping some hidden features for expert users rates highly (the feeling of being a power user because you know some features others don't makes the user feel a little bit loyal to your app).
So if it's not perfectly discoverable, that's probably something the designers are happy with.
True, but that's just the first-time experience; once they understand it, users will trust that the scrolling function will work in subsequent sessions.
With websites, every implementation is slightly different... so users get that confusing "first time experience" on every single new website they visit...
Object orientation is not really about making use of structs - many (most?) functional languages also use structs; it's just a grouping of related data.
Object-orientation is about grouping together functionality and data. I.e., an object consists of both a struct, and the functions that act on that struct, known as methods.
In OO, your type additionally covers the functions that can be called on the data, not just the data itself. E.g. you might have a car and a boat that both only have a position and velocity as data, but the boat has a "sink" method the car doesn't.
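A minimal sketch of that idea in Java (the class and method names here are just made up for illustration):

    class Car {
        double position;
        double velocity;                 // just data, like a struct

        void move(double dt) {           // behaviour bundled with the data
            position += velocity * dt;
        }
    }

    class Boat {
        double position;
        double velocity;

        void move(double dt) {
            position += velocity * dt;
        }

        void sink() {                    // same data as Car, but an extra method
            velocity = 0;
        }
    }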
For games programming in particular, there is a paradigm known as ECS or Entity Component System. (Depending on your view you might call this a sub-paradigm of object orientation, but I think it's much more accurately described as an alternative.) As the Wikipedia article states:
> An entity only consists of an ID and a container of components. The idea is to have no game methods embedded in the entity.
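A rough sketch of the shape of that (hypothetical names, not any particular engine's API):

    import java.util.*;

    class Position { double x, y; }          // components are plain data
    class Velocity { double dx, dy; }

    // An entity is nothing but an ID plus a bag of components - no methods.
    class Entity {
        final int id;
        final Map<Class<?>, Object> components = new HashMap<>();
        Entity(int id) { this.id = id; }
    }

    // All behaviour lives in "systems" that run over entities carrying the
    // components they care about.
    class MovementSystem {
        void update(List<Entity> entities, double dt) {
            for (Entity e : entities) {
                Position p = (Position) e.components.get(Position.class);
                Velocity v = (Velocity) e.components.get(Velocity.class);
                if (p != null && v != null) {
                    p.x += v.dx * dt;
                    p.y += v.dy * dt;
                }
            }
        }
    }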
EDIT: There's also the very interesting paradigm that e.g. Julia uses, known as "Multiple Dispatch". This is kind of like defining methods on objects, except that the method is defined on a combination of objects instead of a single one.
E.g., in traditional OO, you might have a vehicle#crash method. And it might take another vehicle as argument, e.g. car.crash(truck). But in multiple dispatch you define a function crash that takes in two vehicles, and then depending on the type of the vehicles given, it changes its behaviour so that crash(car, truck) is different from crash(car, car).
In a sense, the function is not thought of as belonging to either the car or the truck, but as belonging to the pair of them, so conceptually this is different to OO.
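Java can only approximate this (overloads are chosen at compile time from the declared types, whereas Julia dispatches at run time on the actual types of all arguments), but the shape of the contrast is something like:

    class Car { }
    class Truck { }

    class Collisions {
        // Traditional OO would put a crash() method on Car, dispatching only on
        // the receiver. With multiple dispatch, the function stands alone and
        // the combination of argument types picks the behaviour:
        static String crash(Car a, Truck b) { return "car vs truck"; }
        static String crash(Car a, Car b)   { return "car vs car"; }
    }

    // Collisions.crash(new Car(), new Truck())  -> "car vs truck"
    // Collisions.crash(new Car(), new Car())    -> "car vs car"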
I'm not particularly familiar with the paradigm so I'm sure I'm not doing it justice, but you can read further on Wikipedia and in the Julia docs, or the given video:
Notice that I have to explicitly state "BiFunction". There is also "Function" for single-argument functions, and nothing for more arguments (there are also `Runnable`, `Consumer` and `Supplier` for fewer arguments in or out). This is because BiFunction isn't actually a function - it's an INTERFACE! Any class that implements it by providing an 'apply' method with the right signature will satisfy it ('andThen' is a default method, so you get it for free) and can be passed in. You can make your own class and Java will happily accept it into this method.
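(For context, the kind of method signature I'm talking about is something along these lines - a toy example of my own, not any particular library's API:)

    import java.util.function.BiFunction;

    class Demo {
        static String combine(BiFunction<Integer, Integer, String> f, int a, int b) {
            // The parameter type has to be spelled out as the interface, BiFunction.
            return f.apply(a, b);
        }
    }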
Java then just adds some nice syntactic sugar to make stuff look like functions. E.g., if you do want to define an anonymous lambda like
    (x, y) -> "" + x + y;
What happens under the hood is that Java defines an anonymous class implementing BiFunction. You can assign it to a variable and do everything you would want to do with an object:
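(Something along these lines - the exact generated class name in the toString() output will vary:)

    import java.util.function.BiFunction;

    BiFunction<Integer, Integer, String> bif = (x, y) -> "" + x + y;

    String s = bif.apply(3, 4);                 // "34"
    BiFunction<Integer, Integer, Integer> len =
        bif.andThen(String::length);            // chaining via the default method
    System.out.println(bif);                    // e.g. Main$$Lambda$14/0x...@...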
I can call bif.toString() and all the other methods defined on Object in Java. It's really not a function, it's an object holding a function:
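(Conceptually it's as if the compiler wrote something like this for you - a sketch, not the literal generated code:)

    import java.util.function.BiFunction;

    class MyBiFunction implements BiFunction<Integer, Integer, String> {
        @Override
        public String apply(Integer x, Integer y) {
            return "" + x + y;      // <- the "blank": whatever your lambda body was
        }
        // toString(), equals(), hashCode() come from Object;
        // andThen() comes from the interface's default method.
    }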
and if you were to go and implement your own BiFunction as above (filling in the blanks) - you could pass it around exactly the same places as your "anonymous lambda" and it would work exactly the same way because it IS the same thing.
Like I said, a very object-oriented approach to functionality.
I was confused what they meant by "simulation". But they basically mean like a role-playing game. Like, they acted out how they would respond in that particular situation.
I'm confused how the outputs of the simulation can be considered as anything more than a reflection of the inputs. E.g. the findings include a description of how the public reacts, but that reaction was created by the simulators according to what they thought would happen. Then they get to write a report saying that "the simulation showed X would happen", even though that's equivalent to "we thought X would happen, so we made it happen in the simulation".
I guess there's value in setting up a situation where people actually consider what would happen and write down what they come up with. At least that can point out problems of the "obvious once you think about it" variety.
I wasn't only thinking of computer simulations. The word "simulation" is very vague.
It could potentially mean one of:
- Computer Simulation
- Pen & Paper Mathematical models
- Pen & Paper narrative modelling (i.e., writing a paper about different situations and the possible contingency plans - though granted this is not usually referred to as "simulation")
- Role-playing a situation via speech
- Acting out a situation with props and physical movement / simulated limited communication
- Sending out simulated broadcasts ("this is only a test")
- Sending out false but believable broadcasts
- Infecting the public with a (hopefully less harmful) disease in order to gauge response.
> Discussions, debates (some rather heated), and decisions focused on the public health response, lack of an adequate supply of smallpox vaccine, ...
They seem to have missed access to testing (never mind vaccines), which is the big thing missing today, at least in the US in the first month or so, plus many other countries. Not to mention following the limitations of the WHO, CDC, et al. guidelines in a scenario with plenty of unknowns, instead of one we know well like smallpox (which caused some of the earliest epidemics in the Americas, on Hispaniola - modern-day Haiti - to be specific, around 1507, shortly after the arrival of Europeans: the opposite of something novel).
But I guess this biowarfare scenario is relying heavily on the idea that the US intelligence community and (public/private) health care systems will find out exactly what it was rather quickly and already have established testing and vaccines. Which, to be honest, would be nice to have right now, and makes more sense for a biowarfare attack than for the global epidemic we currently have.
Edit: nice find by the way, it puts the project in better perspective in regards to scale and participants. From the stuff I've read/watched, it seems these "war game" scenarios have become quite frequent in many areas of the US and at the federal level (plenty of them thanks to counter-terrorism spending in places like NYC). Much like the endless NATO wargames since the Cold War started, and more recently with Russia. The politics and bureaucracy, especially early on, seem inescapable and hard to reliably war game IMO. Especially for something without a clear "adversary" or tactical weapon.
A better name for "the speed of light" might actually be "the speed of time" or "the speed of information transfer".
It's a mathematical limit and you can derive it using pretty much anything, not just light.
As an object goes faster, it experiences time dilation, length contraction, etc. The "speed of light" c is basically the point at which all of these effects reach either zero or infinity.
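Concretely, those effects are all governed by the Lorentz factor:

    gamma = 1 / sqrt(1 - v^2 / c^2)

As v approaches c, the denominator goes to zero and gamma blows up to infinity: clocks appear to slow without bound, and measured lengths shrink by 1/gamma toward zero. Plug in v > c and you'd be taking the square root of a negative number, which is one way to see why c behaves as a hard limit rather than just a very big number.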
It also "just so happens" that electromagnetic waves (such as light) travel at exactly this speed in a vacuum. They travel that fast because they travel at the maximum possible speed, and that happens to be the maximum possible speed.
It's like if you always traveled at the speed limit because you didn't want to break the law, and we called it "triplesex_ speed", but really it's just the maximum speed that anyone would go if they didn't want to break the law.
Except in this case it's not just a law - it is (to the best of our knowledge) physically impossible to go faster. It's not just that we haven't seen anything go that fast - it's that going faster than that doesn't even really make sense theoretically; for example, it would result in time travel (this has to do with the fact that the order in which events happen, and the speed at which time passes, differ between observers depending on how fast they're traveling).
- "severity" as level of importance the tester assigned it when they found the bug and
- "priority" as level of importance a developer assigned to it after triage.
That first piece of information is important when you need to determine which bugs to look at first for triage, but how important is it after that point? Can it not just be replaced after the developer's judgement?
In other words, could you not have a single priority field? The tester uses a heuristic to assign an initial priority (e.g., crashes are P0, cosmetic are P4). The dev uses this to prioritize which bugs to triage first, and once they've determined a new priority based on customer experience combined with app behaviour, they replace the old one.
If you really need to go back and check what the tester assigned, then I assume you can just use the "history" or "revision" feature in your bug tracking app.
Additionally, as suggested in a different comment, you can add a label for the bug's type if you feel that's important (crashing, lagging, cosmetic, etc.).
Perhaps the message here is that the app's behaviour in a vacuum is not the sole determinant of its priority. But then that should be the message, rather than claiming there is another metric which needs to be separately tracked when evaluating bugs.
I think the formula should be: severity * (how widespread it is) - ease of workaround = priority. So if any of those measures change (e.g. an easy workaround is discovered, or it's determined that the bio page that is crashing is almost never viewed), then the priority should be adjusted. Having just severity without a measure of 'how many people does this impact?' and 'just how badly does this impact them?' seems like it's missing part of the picture.
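To make that concrete with made-up 0-10 scales: a crash (severity 9) on a bio page almost nobody views (widespread 1) with an easy workaround (6) scores 9*1 - 6 = 3, while an embarrassing typo (severity 4) on the landing page everyone sees (widespread 10) with no workaround (0) scores 4*10 - 0 = 40, so the typo would actually outrank the crash.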
But I think it should be more like a risk calculation: multiply by the "impact" or "damage". A typo might offend a small number of people but have a big publicity impact.
> when you need to determine which bugs to look at first for triage
If you can't at least do a rough triage of all the new bugs in one sitting, you're either not allocating enough time for triage, or letting yourself get way too bogged down with ancillary conversation during the triage meeting, or it's just time to scrap the product and go home.
If you can stay on top of your triage, then there's not really any need to worry about what order you do them in.
Honestly, I've found that testers are poor judges of priority. They get emotionally involved in their bug and don't really make a good judgement of the priority to the business.
That's why the split between priority and severity exists: The tester is telling you the subjective view of the bug as if a user encountered it; and the triage process is all about judging the priority of fixing the bug in this particular release cycle.
Testers are not supposed to judge priority, that's a business decision (that could be based on technical know-how) and as such should be decided by business people aka PMs
The previous commenter was not saying that machine learning didn't exist 10 years ago. They were saying that "many of the TOP CONTRIBUTORS TO THAT FIELD were not doing machine learning research 10 years ago."
There are also people who were doing machine learning 30 years ago and are still doing so. But there are people who were doing, say, fluid dynamics simulations 10 years ago who self-taught machine learning, and are now making significant contributions to machine learning, by transferring their knowledge of programming, optimization, calculus, etc.