1) In its earliest days, API stood for "Application Program Interface"; now, mostly "Application Programming Interface". We might build and test APIs far better if we think Application *Programmers'* Interfaces. Programs alone never use APIs; people writing and using programs do.
2) It might be easy to think that programs use APIs, or that programs call APIs. But that's like thinking that drill bits use chucks, or that lamps use switches, sockets, and extension cords. *People* use drills and lamps—and their elements—as parts of integrated systems.
3) APIs—like everything else that gets built—are built from the perspective of an insider. That’s inevitable; the act of building something automatically puts you inside the builder’s perspective. Escaping that perspective is essentially impossible, until you forget having built it.
4) Of course we can form ideas about what outsiders might need or desire, and build something with the intention of making it useful, helpful, powerful, friendly, or reliable to outsiders. But while building, we're unable to drop what we know already; to see it from the outside.
5) It’s a really good idea to review and examine and test our products from the inside as we're building them, and to confirm that what we've just built is reasonably close to what we intended to build. If it isn’t, we could reasonably anticipate plenty of surprises and trouble.
6) But if we’re building something for other people to use, we must do more than confirm that it can do what we intended. We must also anticipate and address problems that outsiders will have with it—and how we might be fooled by our insiders' perspective on what we’re building.
7) This applies in spades to APIs, because the user of the API is never a program; that’s only an abstraction. The user is always a person—indirectly, an end user or stakeholder of a product; or more directly, someone using the API as means to help indirect users get stuff done.
8) I've been doing a fair amount of coding recently. Programming something non-trivial always includes some amount of *necessary confusion* as we encounter new problems to be solved, identify and apply technologies to solving them, and develop the solutions—and rinse and repeat.
9) But whenever I'm coding, I'm also experiencing a substantial amount of confusion that I perceive as being unnecessary: in particular, weird (to the outsider) mental models of the product; unhelpful or just plain wrong error messages; non-existent or lousy API documentation.
10) The source of each of these problems is easy to explain. Insiders already have the insiders' models of the product; they know how the product is intended to work, so they under-emphasize and under-imagine mistakes outsiders might make; minimal documentation—or none—suffices.
11) So the insider's perspective for a product tends to be "it works"—which actually only means "it *can* work"; "it appears to meet some requirements to some degree"; "it works for me, on my machine, and with my (insiders') knowledge". That perspective is intrinsic to insiders.
12) The outsiders' encounters with the product are always different from the insiders'. (The outsiders' perspectives are different from each other, too; that’s for later.) An API is an interface to the product's (presumably, typically) hidden internals—as it should be.
13) Checking that API calls return a correct result or a correct error code is reassuring for the builders, and that’s okay. Builders should do that. But for testers, checking the output doesn’t really address a much bigger deal, which is that APIs will be used by outsiders.
14) To TEST a product does not simply mean to check its output. To test something is to evaluate it by learning about it through experiencing, exploring, and experimenting. And whose experience of an API is crucial? The experience of an outside programmer who uses it. Test THAT.
15) As with anything else, the real test of a product comes with people’s experience of using the damned thing. So, testers: are you simply checking the output of the API your organisation is developing? Or are you trying to evaluate the experience of some contemplated user?
16) To TEST a product and its API means (among other things) to evaluate output from error conditions; not simply to check if you got some anticipated return code. Does the API return a description of what went wrong, such that it helps the outsider trying to troubleshoot?
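For instance, here's a minimal sketch in Python with the requests library, against a hypothetical endpoint (all the names here are invented):

```python
# A tester's probe of an error condition, not just a status-code check.
# The endpoint and fields are hypothetical.
import requests

resp = requests.post(
    "https://api.example.com/v1/orders",  # hypothetical endpoint
    json={"quantity": -1},                # deliberately invalid input
)

# A bare output check stops here...
assert resp.status_code == 400

# ...but the tester's question is: does the error help an outsider?
body = resp.json()
print(body.get("error", "(no error field at all?)"))
# Helpful:   "quantity must be a positive integer; got -1"
# Unhelpful: "Bad Request", or a bare internal stack trace
```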
17) An API I was using recently returned "insufficient rights to perform operation", which led me down a rabbit hole of investigating why admin status wouldn’t permit it. Turns out it was looking for a string instead of a JSON. Tell your user THAT, not something misleading.
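A hypothetical reconstruction of that trap, to make the pattern concrete (invented endpoint and payload; Python with requests): identical credentials, two body encodings, and a rights error that has nothing to do with rights.

```python
# Hypothetical sketch: the same admin credentials succeed or fail
# depending only on how the request body is encoded.
import requests

url = "https://api.example.com/v1/widgets"          # hypothetical
headers = {"Authorization": "Bearer ADMIN_TOKEN"}   # valid admin token
payload = {"name": "smoke-test", "enabled": "true"}

# Sent as JSON: fails with "insufficient rights to perform operation",
# sending the user off to investigate permissions in vain.
requests.post(url, headers=headers, json=payload)

# Sent as a form-encoded string: succeeds.
requests.post(url, headers=headers, data=payload)
```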
18) When I ask another API for a Web element on a page, it returns a data structure that prominently features the string "located:false", which misleads outsiders into believing that the element was not found. Turns out that "located" refers to some aspect internal to the API.
19) When testing that API, "located==false" would be a correct result, based on insider knowledge (or something like it, obtained from exasperating amounts of trying and retrying and visiting Stack Overflow). But "located" means something else entirely to those new to the API.
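Here's a sketch of what that encounter looks like from the outside (hypothetical endpoint and fields):

```python
# The response structure, as a newcomer encounters it. "located" reads
# as "found"; only insider knowledge reveals it means something else.
import requests

resp = requests.get(
    "https://api.example.com/v1/page/elements",   # hypothetical
    params={"selector": "#checkout-button"},
)
data = resp.json()

print(data["located"])          # False -- "not found", surely?
print(data["element"]["tag"])   # "button" -- yet here's the element!
```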
20) Therefore, when building and testing an API, you *could* ask "Could this call or return value afford a different interpretation to outsiders than to the builders of the product?" But your insider's perspective will bias you towards your existing, established interpretation.
21) Therefore: have someone try to USE the API. Give them ideas for something to build, if they don’t have some already. Step back as they build it, and have them log the problems and confusions they encounter. When they do, avoid explaining confusion away. Prefer fixing the API.
22) When building an API, assume from the outset that other people, outsiders, will be using it. Document it from that perspective, anticipating that they are trying to create something that other people will use. Your examples should be based on that, not on isolated calls.
23) When testing an API, try building something with it. If a function returns an incorrect value, that's obviously a bug. But if a function call is named in a potentially confusing way, that's also a bug. If the documentation doesn’t address that confusion, that's also a bug.
24) If the documentation includes only atomic function calls, and doesn’t include examples of useful multi-step transactions: Severity 1 bug. If the documentation includes only installation instructions: Severity 0 bug. Yes, I'm talking about 85% of the projects on GitHub.
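For contrast, here's the shape that a documented multi-step example might take. Everything in it is invented (endpoints, fields, token); the point is that it walks through a complete transaction rather than isolated calls.

```python
# A documented walkthrough: create an order, attach an address,
# submit it, and confirm the state transition. All names hypothetical.
import requests

BASE = "https://api.example.com/v1"
auth = {"Authorization": "Bearer YOUR_TOKEN"}

# 1. Create a draft order.
order = requests.post(f"{BASE}/orders", headers=auth,
                      json={"sku": "ABC-123", "quantity": 2}).json()

# 2. Attach a shipping address to the order we just created.
requests.put(f"{BASE}/orders/{order['id']}/address", headers=auth,
             json={"street": "1 Main St", "city": "Toronto"})

# 3. Submit the order, and confirm that its state changed.
result = requests.post(f"{BASE}/orders/{order['id']}/submit",
                       headers=auth).json()
assert result["status"] == "submitted"
```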
25) Testers: we can provide a super-valuable service to the project by keeping a careful record of our experience with trying to test the API—when our testing includes trying to use it. Every problem you run into is plausibly a problem someone else will run into. That is: a bug.
26) Every successful interaction you have with the API as you’re trying to test it is plausibly another element in examples and stories that could go into the API's tutorial user guide. Try using the API, record your experience, and the project gets documentation work for free!
27) Every bit of confusion you experience as you develop code that exercises the API is plausibly a bug in the interface. Every time that confusion is resolved poorly, slowly, or not at all by the documentation, that’s plausibly a bug in the interface or its documentation.
28) That, by the way, is an instance of an important testing heuristic: *Every testability problem is plausibly a usability problem; and every usability problem is plausibly a testability problem.* Something that is hard to test is often a pain to use, and vice versa. Report it.
29) I've heard people say, "Avoid testing from the user interface; wherever possible, test from the API instead." This is generally bad advice on several levels, not least because it obscures the fact that the API IS a user interface for one user—a programmer—to help other users.
30) The API is also a user interface whereby testers can get access to aspects of the product that might be much harder to test otherwise. So, for an API, consider not only end users, and not only programmers, but also testers as its users. Treat the API itself as a product.
31) Model the API and test through it in terms of structural elements, including its documentation; functions and features it offers and affords; data that it accepts, rejects, stores, retrieves, processes, puts out, deletes. Consider what it does and doesn’t provide access to.
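One way to turn that modelling into action, sketched against an invented endpoint; each probe is a question about the interface, not a pass/fail check:

```python
# Probing what the API accepts, rejects, stores, and gives back.
# Endpoint and fields are hypothetical.
import requests

probes = [
    {"name": ""},                      # empty: rejected, or stored silently?
    {"name": "x" * 10_000},            # oversized: truncated, or refused?
    {"name": "Zoë 测试 🚀"},            # non-ASCII: preserved on retrieval?
    {"name": None},                    # null: treated like missing, or not?
    {"unexpected_field": "ignored?"},  # extras: rejected, or dropped quietly?
]

for body in probes:
    resp = requests.post("https://api.example.com/v1/items", json=body)
    print(body, "->", resp.status_code, resp.text[:80])
```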
32) Consider languages and libraries that the product AND the API might depend upon—and the programs that depend on the product and the API. Consider how people will encounter and use it, and problems they will have using it—and consider reasonably foreseeable misuse and abuse.
33) Consider how time affects the product. How’s the response time? What happens if things are requested out of order? All at once? Do strings of transactions or interrupted or disrupted transactions time out? Do session tokens expire?
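A sketch of some simple time-related probes (hypothetical endpoint again; Python with requests and a thread pool):

```python
# Response time under a burst of concurrent requests, then the
# behaviour of a stale session token. All names hypothetical.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://api.example.com/v1/status"   # hypothetical

def timed_get(_):
    start = time.monotonic()
    resp = requests.get(URL)
    return resp.status_code, time.monotonic() - start

# All at once: twenty requests in parallel. How does latency degrade?
with ThreadPoolExecutor(max_workers=20) as pool:
    for status, elapsed in pool.map(timed_get, range(20)):
        print(status, f"{elapsed:.3f}s")

# Does an expired token fail clearly, or confusingly?
resp = requests.get(URL, headers={"Authorization": "Bearer STALE_TOKEN"})
print(resp.status_code, resp.text[:80])
```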
34) Examine the API in terms of quality criteria you apply to the rest of the product—remembering that code itself won't experience or notice problems in the product. But people using code and people writing code will. If there's a product and an interface, there's always a user.
35) For a guide to testing through an API, I offer developsense.com/blog/2018/07/e… (four parts).

Rapid Software Testing Explored for Europe runs May 25-28. Register here: rapid-software-testing.com/attending-rst/. RSTE for the Americas will happen June 21; registration links coming soon.
