API Documentation News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is documenting their APIs, going beyond just a static reference, and understanding the details of each request and response.

REST and gRPC Side by Side In New Google Endpoints Documentation

Google has been really moving forward with their development of, and storytelling around, gRPC, their high speed approach to doing APIs that uses HTTP/2 as a transport, and protocol buffers (ProtoBuf) as its serialized message format. Even with all this forward motion they aren't leaving everyone doing basic web APIs behind, and are actively supporting both approaches across all new Google APIs, as well as in their services and tooling for deploying APIs in the Google Cloud–supporting two-speed APIs side by side, across their platform.

When you are using Google Cloud Endpoints to deploy and manage your APIs, you can choose to offer a more RESTful edition, as well as a more advanced gRPC edition. They've continued to support this approach across their service features and tooling, and now in how your APIs get documented. As part of the rollout of a supporting API portal and documentation for your Google Cloud Endpoints, you can automatically document both flavors of your APIs–making a strong case for offering both types of APIs, depending on the types of use cases you are looking to solve, and the types of developers you are catering to.

In my experience, simpler web APIs are ideal for people just getting going on their API journey, and will accommodate 60-75% of the API deployment needs out there, while some organizations further along in their API journey, and those providing B2B solutions, will potentially need higher performance, higher volume gRPC APIs. That makes what Google is offering with their cloud API infrastructure a pretty compelling option for helping mature API providers shift gears, or even helping folks understand that they'll be able to shift gears down the road. You get an API deployment and management solution that supports both speeds simultaneously, along with the supporting features, services, and tooling, like documentation that delivers at both speeds.
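
To make the two-speed idea concrete, here is a minimal sketch of what consuming the same logical API at both speeds can look like. The service, endpoint, and module names are all hypothetical, and the gRPC half is shown as comments since it depends on stubs generated from a provider's .proto files.

```python
# Sketch: one "get quote" capability, consumed at two speeds.
# Everything named here is illustrative, not a real Google service.
import json
import urllib.request

def get_quote_rest(base_url: str, symbol: str) -> dict:
    """The simple web API edition: one HTTP request, JSON response."""
    with urllib.request.urlopen(f"{base_url}/v1/quotes/{symbol}") as resp:
        return json.load(resp)

# The gRPC edition rides HTTP/2 and protocol buffers, and needs client
# stubs generated from the service's .proto file (e.g. with grpcio-tools):
#
#   import grpc
#   import quotes_pb2, quotes_pb2_grpc  # generated code, hypothetical names
#
#   channel = grpc.secure_channel("quotes.example.com:443", grpc.ssl_channel_credentials())
#   stub = quotes_pb2_grpc.QuotesStub(channel)
#   quote = stub.GetQuote(quotes_pb2.QuoteRequest(symbol="AAPL"))
```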

Normally I am pretty skeptical of single provider / community approaches to delivering alternatives to mainstream APIs. It is one of the reasons I still hold reservations around GraphQL. However with Google and gRPC, they have put HTTP/2 to work, and the messaging format is open source. While the approach is definitely all Google, they have embraced the web, which I don't see out of the Facebook led GraphQL community. I still question Google's motives, not because they are up to anything shady, but because I'm skeptical of EVERY company's motivations when it comes to APIs. After eight years of doing this I don't trust anyone to not be completely self serving. However, I've been tuned into gRPC for some time now and I haven't seen any signs that make me nervous, and they keep delivering beneficial features like they did with this documentation, keeping me writing stories and showcasing what they are doing.


OpenAPI Makes Me Feel Like I Have A Handle On What An API Does

APIs are hard to talk about across large groups of people while ensuring everyone is on the same page. APIs don't have much of a visual side to them, providing a tangible reference for everyone to use by default. This is where OpenAPI comes in, helping us "see" an API, and establish a human and machine readable document that we can produce, pass around, and use as a reference to what an API does. OpenAPI makes me feel like I have a handle on what an API does, in a way that I can actually have a conversation around with other people–without it, things are much fuzzier.

Many folks associate OpenAPI with documentation, code generation, or some other tooling or service that uses the specification–putting their emphasis on the tangible thing, over the definition. While working on projects, I spend a lot of time educating folks about what OpenAPI is, what it is not, and how it can facilitate communication across teams and API stakeholders. While this work can be time consuming, and a little frustrating sometimes, it is worth it. A little education and OpenAPI adoption can go a long way to moving projects along, because (almost) everyone involved is able to actively participate in moving API operations forward.

Without OpenAPI it is hard to consistently design API paths, as well as articulate the headers, parameters, status codes, and responses being applied across many APIs, and teams. If I ask, "are we using the sort parameter across APIs?", and there is no OpenAPI, I can't get an immediate or timely answer–it is something that might not be reliably answered at all. That makes OpenAPI a pretty critical conversation and collaboration driver across the API projects I'm working on. I am not even getting to the part where we are deploying, managing, documenting, or testing APIs. I'm just talking about APIs in general, and making sure everyone involved in a meeting is on the same page when we are talking about one or many APIs.
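
To make that concrete, here is a hedged sketch of the kind of immediate answer a folder of OpenAPI definitions makes possible–scanning every definition for use of a sort query parameter. The folder layout is illustrative, and the PyYAML package is assumed.

```python
# Sketch: answer "are we using the sort parameter across APIs?" by
# scanning a folder of OpenAPI definitions. Layout is illustrative.
from pathlib import Path

import yaml  # PyYAML

for spec_file in Path("openapi").glob("*.yaml"):
    spec = yaml.safe_load(spec_file.read_text())
    for path, operations in spec.get("paths", {}).items():
        for method, operation in operations.items():
            if not isinstance(operation, dict):
                continue  # skip path-level keys like "parameters"
            for param in operation.get("parameters", []):
                if param.get("name") == "sort" and param.get("in") == "query":
                    print(f"{spec_file.name}: {method.upper()} {path} uses sort")
```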

Almost every API I'm working on starts as a basic OpenAPI, even with just a title and description, published to a Github, Gitlab, Bitbucket, or other repository. Then I usually add schema definitions, which drive conversations about how the schema will be accessed, as we add paths, parameters, and other details of the requests and responses for each API. OpenAPI acts as the guide throughout the process, ensuring we are all on the same page, and that all stakeholders, as well as myself, have a handle on what is going on with each API being developed. Without OpenAPI, we can never quite be sure we are all talking about the same thing, let alone having a machine readable definition that we can all take back to our corners to get work done.
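
For a sense of how little it takes to get started, here is a sketch of that kind of seed definition, built as a Python dict and dumped to YAML; the title, description, and version are placeholders.

```python
# Sketch: the seed OpenAPI that starts most of my projects, dumped to
# YAML with PyYAML. All names are placeholders.
import yaml

seed = {
    "openapi": "3.0.0",
    "info": {
        "title": "Example Service API",
        "description": "Seed definition; paths and schema come later.",
        "version": "0.1.0",
    },
    "paths": {},                    # grown as the design conversation progresses
    "components": {"schemas": {}},  # schema definitions drive the early talks
}

print(yaml.safe_dump(seed, sort_keys=False))
```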


Helping Stoplight.io Get The Word Out About Version 3.0

I've been telling stories about what the Stoplight.io team has been building for a couple of years now. They are one of the few API service provider startups left that are doing things that interest me, and really delivering value to their API consumers. In the last couple of years, as things have consolidated, and funding cycles have shifted, there just hasn't been the same amount of investment in interesting API solutions. So when Stoplight.io approached me to do some storytelling around their version 3.0 release, I was all in–not just because I'm getting paid, but because they are doing interesting things that I feel are worth talking about.

I’ve always categorized Stoplight.io as an API design solution, but as they’ve iterated upon the last couple of versions, I feel they’ve managed to find their footing, and are maturing to become one of the few truly API lifecycle solutions available out there. They don’t serve every stop along the API lifecycle, but they do focus on a handful of the most valuable stops, and most importantly, they have adopted OpenAPI as the core of what they do, allowing API providers to put Stoplight.io to work for them, as well as any other solutions that support OpenAPI at the core.

As far as the stops along the API lifecycle that they service, here is how I break them down:

  • Definitions - An OpenAPI driven way of delivering APIs that goes beyond just a single definition, and allows you to manage your API definitions at scale, across many teams, and services.
  • Design - One of the most advanced API design GUI solutions out there, helping you craft and evolve your APIs using the GUI, or working directly with the raw JSON or YAML.
  • Virtualization - Enabling the mocking and virtualization of your APIs, allowing you to share, consume, and iterate on your interfaces long before you have to deliver more costly code.
  • Testing - Provides the ability to not just test your individual APIs, but to define and automate detailed tests, assertions, and a variety of scenarios, ensuring APIs are doing what they should be doing.
  • Documentation - Allows for the publishing of simple, clean, but interactive documentation that is OpenAPI driven, and can be shared with your team, and your API community, through a central portal.
  • Discovery - Tightly integrated with Github, and maximizing an OpenAPI definition in a way that makes the entire API lifecycle discoverable by default.
  • Governance - Allows for teams to get a handle on the API design and delivery lifecycle, while working to define and enforce API design standards, and ensure a certain quality of service across the lifecycle.

They may describe themselves a little differently, but in terms of the way that I tag API service providers, these are the stops they service along the API lifecycle. They have a robust API definition and design core, with an attractive, easy to use interface, which allows you to define, design, virtualize, document, test, and collaborate with your team, community, and other stakeholders. That makes them a full API lifecycle service provider in my book, because they focus on serving multiple stops, and they are OpenAPI driven, which allows every other stop to also be addressed using any other tools and services that support OpenAPI–which is how you do business with APIs in 2018.

I’ve added API governance to what they do, because they are beginning to build in much of what API teams are going to need to begin delivering APIs at scale across large organizations. Not just design governance, but the model and schema management you’ll need, combined with mocking, testing, documentation, and the discovery that comes along with delivering APIs like Stoplight.io does. They reflect not just where the API space is headed with delivering APIs at scale, but what organizations need when it comes to bringing order to their API-driven, software development lifecycle in a microservices reality.

I have five separate posts that I will be publishing over the next couple weeks as Stoplight.io releases version 3.0 of their API development suite. Per my style the posts won't always be directly about their product–I'll be talking about the solutions it delivers–but occasionally you'll hear me mention them directly, because I can't help it. Thanks to Stoplight.io for supporting what I do, and thanks to you, my readers, for checking out what Stoplight.io brings to the table. I think you are going to dig what they are up to.


Breaking Down Your Postman API Collections Into Meaningful Units Of Compute

I'm fascinated with the unit of compute as defined by a microservice, OpenAPI definition, Postman Collection, or other way of quantifying an API-driven resource. Asking the question, "how big or how small is an API?", and working to define the smallest unit of compute needed at runtime. I do not feel there is a perfect answer to any of these questions, but that doesn't mean we shouldn't be asking them, and packaging up our API definitions in a more meaningful way.

As I was profiling APIs, and creating Postman Collections, the Okta team tweeted at me with their own approach to delivering their APIs. They tactically place Run in Postman buttons throughout their API documentation, as well as provide a complete listing of all the Postman Collections they have, showcasing that they have broken up their Postman Collections along what I'd consider to be service lines. They provide small, meaningful collections for each of their user authentication and authorization APIs, each with its own Run in Postman button:

  • Authentication
  • API Access Management (OAuth 2.0)
  • OpenID Connect
  • Client Registration
  • Sessions
  • Apps
  • Events
  • Factors
  • Groups
  • Identity Providers (IdP)
  • Logs
  • Admin Roles
  • Schemas
  • Users
  • Custom SMS Templates

Okta's approach delivers a pretty coherent, microservices approach to crafting their Postman Collections, providing separate API runtimes for each service they bring to the table, which I think gets at what I'm looking to understand when it comes to defining and presenting our APIs. It can be a lot more work to create your Postman Collections like this, rather than just creating one single collection with all API paths, but from an API consumer standpoint, I'd rather have them broken down like this. I may not care about all the APIs, and may just be looking to get my hands on a couple of services–why make me wade through everything?

I have imported the Postman Collections for the Okta API, and added them to my API Stack research. I'm going to convert them into OpenAPI definitions so I can use them beyond just Postman. I will end up merging them all back into a single OpenAPI definition, and a single Postman Collection covering all the API paths. However, I will also be exploding them into individual OpenAPIs and Postman Collections for each individual API path, going well beyond what Okta has done. Further distilling down each unit of compute allows it to be profiled, executed, streamed, or put to work in other meaningful ways in isolation, without the constraints of the other services surrounding it.
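
As a rough sketch of what that exploding step can look like, here is one way to split a flat Postman Collection (v2.1 JSON) into one single-request collection per API path; the input file name is illustrative, and nested folders are not handled.

```python
# Sketch: explode a flat Postman Collection into one single-request
# collection per API path. Input file name is made up.
import json
from pathlib import Path

source = json.loads(Path("okta-users.postman_collection.json").read_text())

for item in source.get("item", []):
    unit = {
        "info": {
            "name": item["name"],
            "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
        },
        "item": [item],  # a single request becomes its own unit of compute
    }
    slug = item["name"].lower().replace(" ", "-")
    Path(f"{slug}.postman_collection.json").write_text(json.dumps(unit, indent=2))
```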


A Visual View Of API Responses Within Our Documentation

Interactive API documentation is nothing new. We’ve had Swagger UI, and other incarnations for over five years now. We also have API explorers, and full API lifecycle client solutions like Postman to help us engage with APIs, and be able to quickly see responses from the APIs we are consuming. In my effort to keep pushing forward the API documentation conversation I’ve been beating the drum for more visual solutions to be baked into our interactive documentation for a while now, encouraging providers to make the responses we receive much more meaningful, and intuitive for consumers.

To help drum up awareness of this aspect of API documentation I'm always on the lookout for any interesting examples of it in the wild. There was the interesting approach out of the Food and Drug Administration (FDA), and now I've seen one out of the web data feeds API provider Webhose.io. When you are making API calls in their interactive dashboard you get a JSON response for things like news articles on the left hand side, but you also get an interesting slider that will show a visual representation of the JSON response on the right side–making it much more readable to non-developers.

It provides a nice way to quickly make sense of API responses, making them more accessible–something that even non-developers can do. Essentially it provides a reverse view source (view results?) for API responses, taking the raw JSON, and providing an HTML lens for the humans trying to make decisions around the usage of a particular API. View source is how I learned HTML back in the mid 1990s, and I can see visualization tools for API responses helping average business users learn JSON, or at least making it a little less intimidating, and something they feel like they can put to work for them.
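
To illustrate the idea, here is a toy sketch of that reverse view source, rendering a raw JSON response as simple HTML tables and lists; the response payload is made up, and a real implementation would do far more.

```python
# Sketch: a toy "reverse view source" that renders a raw JSON response
# as simple HTML for non-developers. The payload is illustrative.
import html
import json

def to_html(value) -> str:
    if isinstance(value, dict):
        rows = "".join(
            f"<tr><th>{html.escape(str(key))}</th><td>{to_html(val)}</td></tr>"
            for key, val in value.items()
        )
        return f"<table>{rows}</table>"
    if isinstance(value, list):
        items = "".join(f"<li>{to_html(val)}</li>" for val in value)
        return f"<ul>{items}</ul>"
    return html.escape(str(value))

response = json.loads('{"title": "Example story", "language": "english", "score": 42}')
print(to_html(response))
```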

I really feel like more visualizations baked into API documentation are the future of interactive API docs. Being able to see API responses rendered as HTML, or as graphs, charts, and other visualizations, makes a lot of sense. APIs are an abstract thing, and even as a developer, I have a hard time understanding what is contained within each API response. I think having visual API responses will help us craft more meaningful API requests, making our API consumption much more precise, and impactful. If you see any interesting visualization layers in your favorite API's documentation, please drop me a line, I'd like to add it to my list of interesting approaches.


Axway Asking for an OpenAPI of The Streamdata.io API So They Can Screenshot It

We are working closely with Axway on a number of projects over here at Streamdata.io. After we got out of a meeting with their team the other day, we received an email from them asking if we had an OpenAPI definition for a demo Streamdata.io market data API. They wanted to include it in some marketing materials, and needed a screenshot of it. To be able to generate the visual they desired, they needed an OpenAPI to make the API tangible enough for capturing in a screenshot, and presenting as part of a larger story.

This may sound like a pretty banal thing, but when you step back and realize the importance of OpenAPI when it comes to communication, and making something very abstract a tangible, visual thing, it becomes more significant. You can tell someone there is a market data API, but taking a screenshot of documentation generated via an OpenAPI, which displays the market data paths, a couple of parameters like stock ticker symbol and maybe date range, and then plugging in some actual values like the ticker symbol for AAPL, and showing the JSON response, takes things to a new level. This is OpenAPI empowered storytelling, marketing, and communications in my book, elevating what OpenAPI brings to the table to new stops along the API life cycle.

This isn't just about documentation. This is about making an abstract API concept more visual, more meaningful, and able to be captured in an image. Axway is trying to demonstrate the value of their API solutions, coupled potentially with Streamdata.io services, in a single image–providing a lot more rich context, and visualizations that amplify their marketing materials. This isn't just documenting what is going on so that developers know what to do with an API, this is telling stories so that business users understand what is possible with an API–using a machine readable format like OpenAPI to help deliver the 1000 words the image will be worth.

Using OpenAPI like this reflects where I'd like to see API documentation go. Sure, we still need dynamic API documentation driven by OpenAPI definitions for developers to understand what is going on, but we also need more snippets, visualizations, and emotion-driving solutions to exist–things that marketers, bloggers, and other storytellers can use in their materials. We need OpenAPI-driven tools that help them plug in a relevant API definition, and generate a meaningful visual that they can use in a slide deck, blog post, or other material. We need our API documentation to speak beyond the developer community and become something that anyone can put to work in their API storytelling efforts–no coding required.


An Opportunity Around Providing A Common OpenAPI Enum Catalog

I'm down in the details of the OpenAPI specification lately, working my way through hundreds of OpenAPI definitions, trying to once again make sense of the API landscape at scale. I'm working to prepare as many API path definitions as I possibly can to be runnable within one or two clicks. OpenAPI definitions and Postman Collections are essential to making this happen, both of which require complete details on the request surface area for an API. I need to know everything about the path, as well as any headers, path, or query parameters that need to be included. A significant aspect of this definition being complete includes default and enum values being present.

If I can't quickly choose from a list of values, or run with a default value, when executing an API, the time to seeing a live response grows significantly. If I have to travel back to the HTML documentation, or worse, do some Googling before I can make an API call, I just went from seconds to potentially minutes or hours before I can see a real world API response. Additionally, if there are many potential values available for each API parameter, enums become critical building blocks to helping me understand all the dimensions of an API's surface area–something that should have been considered as part of the API's design, but often just gets left to API documentation.

When playing with a Bitcoin API with the following path, /blocks/{pool_name}, I need the list of pools I can choose from. When looking to get a stock market quote from an API with the following path, /stock/{symbol}/quote, I need a list of all the ticker symbols. Having, or not having, these enum values at documentation and execution time is essential. Many of these lists of values are so common that developers take them for granted, assuming that API consumers just have them laying around, and that they really aren't worth including in documentation. You'd think we all have lists of states, countries, stock tickers, Bitcoin pools, and other data just laying around, but even as the API Evangelist, I often find myself coming up short.
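
Here is a hedged sketch of what a reusable entry in such a catalog might look like: an OpenAPI path parameter carrying enum and default values, dumped as YAML (PyYAML assumed). The pool names are a hypothetical, incomplete list.

```python
# Sketch: a reusable enum entry for a shared catalog, expressed as an
# OpenAPI path parameter with enum and default values. Values are
# illustrative only.
import yaml

pool_name_parameter = {
    "name": "pool_name",
    "in": "path",
    "required": True,
    "description": "Mining pool to filter blocks by.",
    "schema": {
        "type": "string",
        "enum": ["antpool", "f2pool", "slushpool"],  # hypothetical list
        "default": "antpool",
    },
}

print(yaml.safe_dump(pool_name_parameter, sort_keys=False))
```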

All of this demonstrates a pretty significant opportunity for someone to create a Github hosted, searchable, forkable catalog of common OpenAPI enum lists, providing an easy place for API providers and API consumers to discover simple or complex lists of values that should be present in API documentation, and included as part of all OpenAPIs. I recommend just publishing each enum JSON or YAML list as a Github Gist, and then publishing a catalog via a simple Github Pages website. If I don't see something pop up in the next couple of months, I'll probably begin publishing something myself. However, I need another API related project like I need a hole in the head, so I'm holding off in hopes another hero or champion steps up and owns the enum portion of the growing OpenAPI conversation.


Using Jekyll And OpenAPI To Evolve My API Documentation And Storytelling

I'm reworking my API Stack work as independent sets of Jekyll collections. Historically I just dumped all the APIs.json and OpenAPI files into a central data folder, and grouped them into folders by company name. Now I am breaking them out into tag based collections, using a similar structure, further evolving how I document and tell stories using each API. I had been publishing a single OpenAPI for each platform, but now I'm publishing a separate OpenAPI for each API path–we will see where this goes, it might ultimately end up biting me in the ass. I'm doing this because I want to be able to talk about a single API path, and provide a definition that can be viewed, interpreted, and executed against, independent of the other paths–Jekyll+OpenAPI is helping me accomplish this.

With each API provider possessing its own APIs.json index, and each API path having its own OpenAPI definition, I'm able to mix up how I document and tell stories around these APIs. I can list them by API provider, or by individual API path. I can filter based upon tags, and provide execute-time links that reference each individual unit of API. I have separate JavaScript functions that can be referenced depending on whether the API path is GET, POST, or PUT. I can even inherit other relevant links, like API sign up or terms of service, as part of each API's documentation. I can reference all of this as part of larger documentation, or within blog posts and other pages throughout the website–which will be refreshed whenever I update the OpenAPI definition.

If you aren't familiar with how Jekyll works, it is a static content solution that allows you to develop collections. You can put CSV, JSON, or YAML into these collections (folders), and they become objects you can reference using Liquid syntax. So if I put Twitter's APIs.json and OpenAPI into a folder within my social collection, I can reference site.social.twitter as the APIs.json for Twitter's entire API operations, and I can reference individual APIs as site.social.twitter.search for the individual OpenAPI defining the Twitter search API path. This decouples API documentation for me, and allows me to not just document APIs, but tell stories with API definitions, making my API portals much more interactive, and hopefully engaging.
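
Here is a quick sketch of laying down that kind of collection structure on disk; the provider, file names, and minimal contents are all placeholders, and the collection would still need to be registered in Jekyll's _config.yml before Liquid can see it.

```python
# Sketch: one collection folder per tag, one folder per provider, with
# an APIs.json index plus one OpenAPI per API path. Illustrative only.
from pathlib import Path

twitter = Path("_social") / "twitter"   # the "social" collection
twitter.mkdir(parents=True, exist_ok=True)

# The provider level index, referenced in Liquid via site.social.
(twitter / "apis.json").write_text('{"name": "Twitter", "apis": ["search"]}')

# One OpenAPI per API path, so a single path can be referenced alone.
(twitter / "search.yaml").write_text(
    "openapi: 3.0.0\n"
    "info:\n"
    "  title: Twitter Search\n"
    "  version: 1.0.0\n"
    "paths: {}\n"
)
```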

I just got my API Stack approach refreshed using this new format. Now I just need to go through all my APIs and rebuild the underlying Github repository. I have thousands of APIs that I track on, and I'm curious how this approach holds up at scale. While API Stack is a single repository, I can essentially publish any collection of APIs I desire to any of the hundreds of repositories that make up the API Evangelist network, allowing me to seamlessly tell stories using the technical details of API operations, and the individual API resources they serve up. This further evolves how I tell stories around the APIs I'm tracking on. While my API documentation has always been interactive, I think this newer, more modular approach reflects each unit of value an API brings to the table, rather than just looking to document all the APIs a provider possesses.


Labeling Your High Usage APIs and Externalizing API Metrics Within Your API Documentation

I am profiling a number of market data APIs as part of my research with Streamdata.io. As I work my way through the process of profiling APIs I am always looking for other interesting ideas for stories on API Evangelist. One of the things I noticed while profiling Alpha Vantage was that they highlight their high usage APIs with prominent, very colorful labels. One of the things I'm working to determine in this round of profiling is how "real time" APIs are, or aren't, and the high usage label adds another interesting dimension to this work.

While reviewing API documentation it is nice to have labels that distinguish APIs from each other. Alpha Vantage has a fairly large number of APIs, so it is nice to be able to focus on the ones that are used the most, and are more popular. For example, as part of my profiling I focused on the high usage technical indicator APIs, rather than profiling all of them. I need to be able to prioritize my work, and these labels helped me do that, providing one example of the benefit that these types of labels can bring to the table. I'm guessing that there are many other benefits to labeling popular APIs, beyond just saving me time.

This type of labeling is an interesting way of externalizing API analytics in my opinion, which is another interesting concept to think about across API operations. How can you take the most meaningful data points across your API management processes, distill them down, externalize them, and share them so that your API consumers can benefit from valuable API metrics? In this context, I could see a whole range of labels that could be established, applied to interactive documentation using OpenAPI tags, and made available across API documentation, helping make APIs even more dynamic, and in sync with how they are actually being used, measured, and making an impact on operations.
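
As one hedged sketch of what that could look like in practice, here is a pass that applies a shared "high usage" tag to the operations your analytics flag as the busiest, so documentation tooling can render them as labels. The spec fragment and the analytics-derived list are both made up.

```python
# Sketch: externalize API metrics by tagging the busiest operations in
# an OpenAPI definition. All paths and summaries are illustrative.
openapi = {
    "paths": {
        "/query/sma": {"get": {"summary": "Simple moving average"}},
        "/query/ema": {"get": {"summary": "Exponential moving average"}},
        "/query/obv": {"get": {"summary": "On balance volume"}},
    }
}

high_usage_paths = {"/query/sma", "/query/ema"}  # would come from API management analytics

for path, operations in openapi["paths"].items():
    if path not in high_usage_paths:
        continue
    for operation in operations.values():
        operation.setdefault("tags", []).append("high-usage")

print(openapi["paths"]["/query/sma"]["get"]["tags"])  # ['high-usage']
```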

I'm a big fan of making API documentation even more interactive, alive, and meaningful to API consumers. I'm thinking that tagging and labeling is how we are going to do this in the future, generating a very visual, but also semantic, layer of meaning that we can overlay on our API documentation, making it even more accessible to API consumers. I know that Alpha Vantage's high usage labels have saved me significant amounts of work, and I'm sure there are other approaches that could continue delivering in this way. It is something I'm keeping a close eye on in this increasingly event-driven API landscape, where API integration is becoming more dynamic and real time.


Docker Engine API Has OpenAPI Download At Top Of Their API Docs

I am a big fan of API providers taking ownership of their OpenAPI definition, which enables API consumers to download a complete OpenAPI, import it into any client tooling like Postman, use it to generate client SDKs, and get up to speed regarding the surface area of an API. This is why I like to showcase API providers I come across who do this well, and occasionally shame API providers who don't do it, and who demonstrate to their consumers that they don't really understand what OpenAPI definitions are all about.

This week I am showcasing an API provider who does it well. I was on the hunt for an OpenAPI of the Docker Engine API, for use in a project I am consulting on, and was pleased to find that they have a button to download the OpenAPI for each version of the Docker Engine API right at the top of the page, making it dead simple for me, as an API consumer, to get up and running with the Docker API in my tooling. OpenAPI is about much more than just the API documentation, and something that should be a first class companion to ALL API documentation for EVERY API provider out there–whether or not you are a devout OpenAPI (fka Swagger) believer.

The Docker API team just saved me a significant amount of time in tracking down another OpenAPI, which most likely would have been incomplete, let alone the amount of work that would be required to hand-craft one for my project. I was able to take the existing OpenAPI and publish it to the team Github Wiki for a project I'm advising on. The team will be able to import the OpenAPI into their Postman clients and begin to learn about the Docker API, which will be used to orchestrate the containers they are using to operate their own microservices. A subset of this team will also be crafting some APIs that proxy the Docker API, and allow for localized management of each microservice's underlying engine.
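
As a sketch of what that download button unlocks, here is how you might pull a provider's published OpenAPI and immediately inventory its surface area; the URL is a placeholder for whatever link sits at the top of a provider's docs, and PyYAML is assumed.

```python
# Sketch: fetch a provider's published OpenAPI and list its paths.
# The URL below is a placeholder, not a real download link.
import urllib.request

import yaml  # PyYAML

SPEC_URL = "https://docs.example.com/engine/api/v1.40.yaml"  # placeholder

with urllib.request.urlopen(SPEC_URL) as resp:
    spec = yaml.safe_load(resp.read())

print(spec["info"]["title"], spec["info"]["version"])
for path in sorted(spec.get("paths", {})):
    print(path)
```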

I had to create the Consul OpenAPI for the team last week, which took me a couple hours, so I was pleased to see Docker taking ownership of their OpenAPI. This is a drum I will keep beating here on the blog, until EVERY API provider takes ownership of their OpenAPI definition, providing their consumers with a machine readable definition of their API. OpenAPI is much more than just API documentation, and is essential to making sense of what an API does, and then taking that knowledge and quickly translating it into actual integration, in as short a time as possible. Don't make integrating with your API difficult–reduce as much friction as possible, and publish an OpenAPI alongside your API documentation like Docker does.


API Life Cycle Basics: Documentation

API documentation is the number one pain point for developers trying to understand what is going on with an API, as they work to get up and running consuming the resources it delivers. From many discussions I've had with API providers, it is also a pretty big pain point on their side when it comes to keeping documentation up to date, and delivering value to consumers. Thankfully API documentation has been driven by API definitions like OpenAPI for a while, helping keep things up to date and in sync with changes going on behind the scenes. The challenge for many groups who are only doing OpenAPI to produce documentation is that if the OpenAPI isn't used across the API life cycle, it will often become forgotten, recreating that timeless challenge with API documentation.

Thankfully in the last year or so I'm beginning to see more API documentation solutions emerge, getting us beyond the Swagger UI age of docs. Don't get me wrong, I'm thankful for what Swagger UI has done, but I'm finding it very difficult to get people beyond the ideas that OpenAPI (fka Swagger) is the same thing as Swagger UI, and that the only reason you generate API definitions is to get documentation. There are a number of API documentation solutions to choose from in 2018, but Swagger UI still remains a viable choice for making sure your APIs are properly documented for your consumers:

  • Swagger UI - Do not abandon Swagger UI, keep using it, but decouple it from existing code generation practices.
  • Redoc - Another OpenAPI driven documentation solution.
  • Read the Docs - Read the Docs hosts documentation, making it fully searchable and easy to find. You can import your docs using any major version control system, including Mercurial, Git, Subversion, and Bazaar.
  • ReadMe.io - ReadMe is a developer hub for your startup or code. It’s a completely customizable and collaborative place for documentation, support, key generation and more.
  • OpenAPI Specification Visual Documentation - Thinking about how documentation can become visualized, not just text and data.

API documentation should not be static. It should always be driven from OpenAPI, JSON Schema, and other pipeline artifacts. Documentation should be part of the CI/CD build process, and published as part of an API portal, as mentioned above. API documentation should exist for ALL APIs that are deployed within an organization, and be used to drive conversations across development as well as business groups–making sure the details of API design are always in as plain language as possible.
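
As one sketch of what baking documentation into a CI/CD build can look like, here is a hypothetical gate that fails the pipeline when operations in the OpenAPI lack the descriptions that drive the published docs; the file name is illustrative, and PyYAML is assumed.

```python
# Sketch: a documentation gate for a CI/CD build. Fail the pipeline
# when any operation in the OpenAPI lacks a description.
import sys

import yaml  # PyYAML

HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch"}

spec = yaml.safe_load(open("openapi.yaml"))  # file name is illustrative

missing = [
    f"{method.upper()} {path}"
    for path, operations in spec.get("paths", {}).items()
    for method, operation in operations.items()
    if method in HTTP_METHODS and not operation.get("description")
]

if missing:
    print("Operations missing descriptions:", *missing, sep="\n  ")
    sys.exit(1)  # documentation is part of the contract; break the build
```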

I added the visual documentation link because I'm beginning to see hints of API documentation moving beyond the static, and even dynamic, realm, and becoming something more visual. It is an area I'm investing in with my subway map work, trying to develop a consistent and familiar way to document complex systems and infrastructure. Documentation doesn't have to be a chore, and when done right it can make a developer's day brighter, and help them go from learning to integration with minimal friction. Take the time to invest in this stop along your API life cycle, as it will help both you, and your consumers, make sense of the resources you are producing.


We Are Not Supporting OpenAPI (fka Swagger) As We Already Published Our Docs

I was looking for an OpenAPI for the Consul API to use in a project I'm working on. I have a few tricks for finding OpenAPIs out in the wild, which always start with looking over at APIs.guru, then secondarily Githubbing it (are we to verb status yet?). From a search on Github I came across an issue on the Github repo for Hashicorp's Consul, which asked for "improved API documentation", to which a Hashicorp employee ultimately responded, "we just finished a revamp of the API docs and we don't have plans to support Swagger at this time."–highlighting the continued misconception of what "OpenAPI" is, what it is used for, and how important it can be to not just providing an API, but also consuming it.

First things first. Swagger is now OpenAPI (has been for a while), an API specification format that is in the Open API Initiative (OAI), which is part of the Linux Foundation. Swagger is proprietary tooling for building with the OpenAPI specification. It’s an unfortunate and confusing situation that arose out of the move to the Open API Initiative, but it is one we need to move beyond, so you will find me correcting folks more often on this subject.

Next, let's look at the consumer question, asking for "improved API documentation". OpenAPI (fka Swagger) is much more than documentation. I understand this position, as much of the value it delivers to the API consumer is what we traditionally associate with documentation. It teaches us about the surface area of an API, detailing the authentication, request, and response structure. However, OpenAPI does this in a machine readable way that allows us to take the definition with us, load it up in other tooling like Postman, as well as use it to autogenerate code, tests, monitors, and many other time saving elements when we are working to integrate with an API. The lesson for API consumers here is that OpenAPI (fka Swagger) is much, much more than just documentation.

Then, let's look at it from the provider side. It looks like you just revamped your API documentation, without much review of the state of things when it comes to API documentation. Without being too snarky, after learning more about the design of your API, I'm guessing you didn't look at the state of things when it comes to API design either. My objective is not to shame you for poor API design and documentation practices, just to point out that you did not pick your head up and look around much when you developed a public facing API that many "other" people will be consuming–precisely the time you should be picking up your head and looking around. The lesson for the API provider here is that OpenAPI (fka Swagger) is much, much more than just documentation.

OpenAPI (fka Swagger) is much, much more than just documentation! Instead of being able to fork an OpenAPI definition and share it with my team members, allowing me to drive interactive documentation within our project portal, and empowering each team member to import the definition and get up and running in Postman, I'm spending a couple of hours creating an OpenAPI definition for YOUR API. Once done I will have the benefits for my team that I'm seeking, but I shouldn't have to do this. As an API provider, Consul should provide us consumers with a machine readable definition of the entire surface area of the API, not just static documentation (that is incomplete). Please API providers, take the time to look up and study the space a little more when you are designing your APIs, and learn from what others are doing when it comes to delivering API resources. If you do, you'll be much happier for it, and I'm guessing your API consumers will be as well!


The Transit Feed API Is A Nice Blueprint For Your Home Grown API Project

I look at a lot of APIs. When I land on the home page of an API portal, more often than not I am lost, confused, and unsure of what I need to do to get started. Us developers are very good at complexifying things, and making our API implementations as messy as our backends, and the API ideas in our heads. I suffer from this still, and I know what it takes to deliver a simple, useful API experience. It just takes time, resources, as well as knowledge, to do it properly, and simply. Oh, and caring. You have to care.

I am always on the hunt for good examples of simple API implementations that people can emulate, that aren't from API rockstars like Twilio and Stripe, who have crazy amounts of resources at their disposal. One good example of a simple, useful, well presented API can be found with the Transit Feeds API, which aggregates the feeds of many different transit providers around the world. When I land on the home page of Transit Feeds, I immediately know what is going on, and I go from home page to making my first API call in under 60 seconds–pretty impressive stuff, for a home grown API project.

While there are still some rough edges, Transit Feeds has all the hallmarks of a quality API implementation. A simple UI, with a clear message about what it does on the home page, but most importantly, an API that does one thing, and does it well–providing access to transit feeds. The site uses Github OAuth to allow me to instantly sign up and get my API key–which is how ALL APIs should work. You land on the portal, you immediately know what they do, and you have your keys in hand, making an API call, all without having to create yet another API developer account.

The Transit Feeds API provides an OpenAPI for the API, and uses it to drive their Swagger UI API documentation. I wish the documentation was embedded on the docs page, but I'm just thankful they are using OpenAPI, and provide detailed, interactive API documentation. Additionally, they have a great updates page, providing recent site, feed, and data updates across the project. To provide support they wisely use Github Issues to help provide a feedback loop with all their API consumers.

It isn't rocket surgery. Transit Feeds makes it look easy. They provide a pretty simple blueprint that the rest of us can follow. They have all the essential building blocks, in an easy to understand, easy to get up and running format. They leverage OpenAPI and Github, which should be the default for any public API. I'd love to see some POST and PUT methods for the API, encouraging more engagement from users, but as I said earlier, I'm pretty happy with what is there, and just hope that the project owners keep investing in the Transit Feeds API. It provides a great example for me to use when working with transit data, but also gives me a home grown example of an API project that any of my readers could emulate.


From CI/CD To A Continuous Everything (CE) Workflow

I am evaluating an existing continuous integration and deployment workflow to make recommendations regarding how it can evolve to serve a growing API lifecycle. This is an area of my research that spans multiple areas of my work, but that I tend to house under what I call API orchestration. I try to always step back and look at an evolving area of the tech space as part of the big picture, and attempt to look beyond any individual company, or even the wider industry hype that is moving something forward. I see the clear technical benefits of CI/CD, and I see the business benefits of it as well, but I haven't always been convinced of it as a standalone thing, and have spent the last couple of years trying to understand how it fits into the bigger picture.

As I've been consulting with several enterprise groups working to adopt a CI/CD mindset, and having similar conversations with government agencies, I'm beginning to see the bigger picture of "continuous", and starting to decouple it from just deployment and even integration. The first thing that is assumed–not always evident for newbies, but always a default–is testing. You always test before you integrate or deploy, right? As I watch groups adopt, I'm seeing them struggle to make sure other things I feel are an obvious part of the API lifecycle, but aren't default in a CI/CD mindset, quickly get plugged in–things like security, licensing, documentation, discovery, support, and communications. In the end, I think us technologists are good at focusing on the tech innovations, but often move right past many of the other things that are essential for the business world. I see this happening with containers, microservices, Kubernetes, Kafka, and other fast moving trends.

I guess the point I want to make is that there is more to a pipeline than just deployment, integration, and testing. We need to make sure that documentation, discovery, security, and other essentials are baked in by default. Otherwise us techies might be continuously forgetting about these aspects, and the newbies might be continuously frustrated that these things aren't present. We need to make sure we are continuously documenting, continuously securing, continuously communicating around training, and continuously evolving (and sharing) our road maps. I'm sure what I'm saying isn't anything new for the CI/CD veterans, but I'm trying to onboard new folks with the concept, and as with most areas of the tech sector, I find the naming and on-boarding materials fairly deficient in covering all the concepts large organizations need to make the shift.

I'm thinking I'm going to be merging my API orchestration (CI/CD) research with my overall API lifecycle research, thinking deeply about how everything from definition to deprecation fits into the pipeline. I feel like CI/CD has been highly focused on the technology of evolving how we deploy and integrate (rightfully so) for some time now, and with adoption expanding we need to zoom out and think about everything else organizations will need to be successful. I see CI/CD as being essential to decoupling the monolith, and changing culture at some of the large organizations I'm working with. I want these folks to be successful, and not fall into the trap of only thinking about the tech, but also consider the business and political implications involved with being able to move from annual or quarterly deployments and integrations, to where they can do things in weeks, or even days.


API Deployment Templates As Part Of A Wider API Governance Strategy

People have been asking me for more stories on API governance–examples of how it is working, or not working, at the companies, organizations, institutions, and government agencies I'm talking with. Some folks are looking for top down ways of controlling large teams of developers when it comes to delivering APIs consistently across large disparate organizations, while others are looking for bottom up ways to educate and incentivize developers to operate APIs in sync, working together as a large, distributed engine.

I'm approaching my research into API governance as I would any other area, not from the bottom up, or the top down. I'm just gathering all the building blocks I come across, then beginning to assemble them into a coherent picture of what is working, and what is not. One example I've found of an approach to helping API providers across the federal government better implement consistent API patterns is out of the General Services Administration (GSA), with the Prototype City Pairs API. The Github repository is a working API prototype, documentation, and developer portal that is in alignment with the GSA API design guidelines, providing a working example that other API developers can reverse engineer.

The Prototype City Pairs API is a forkable example of what you want developers to emulate in their work. It is a tool in the GSA's API governance toolbox. It demonstrates what developers should be working towards, in not just their API design, but also the supporting portal and documentation. The GSA leads by example, providing a pretty compelling approach to model, and a building block any API provider could add to their toolbox. I would consider a working prototype to be both a bottom up approach, because it is forkable and usable, and a top down approach, because it can reflect wider organizational API governance objectives.

I could see mature API governance operations having multiple API design and deployment templates like the GSA has done, providing a suite of forkable, reusable API templates that developers can put to use. While not all developers would use them, in my experience many teams are actually made up of reverse engineers, who tend to emulate what they know. If they are exposed to bad API design, they tend to just emulate that, but if they are given robust, well-defined examples, they will emulate healthy patterns instead. I'm adding API deployment templates to my API governance research, and will keep rounding off strategies for successful API governance that can work at a wide variety of organizations, and platforms. As it stands, there are not very many examples out there, and I'm hoping to pull together any of the pieces I can find into a coherent set of approaches folks can choose from when crafting their own approach.


An Example Of How Every API Provider Should Be Using OpenAPI Out Of The Slack Platform

[The Slack team has published the most robust and honest story about using OpenAPI, providing a blueprint that other API providers should be following](https://medium.com/slack-developer-blog/standard-practice-slack-web-openapi-spec-daaad18c7f8). What I like most about Slack's approach to developing, publishing, and sharing their OpenAPI is the honesty behind why they are doing it–to help standardize around a single definition. [They publish and share the OpenAPI on Github](https://github.com/slackapi/slack-api-specs), which other API providers are doing, and which I think should be standard operating procedure for all API providers, but they also go into the realities regarding the messy history of their API documentation–an honesty that I feel ALL API providers should be embracing.

My favorite part of the story from Slack is the opening paragraph, which honestly portrays how they got here: _"The Slack Web API's catalog of methods and operations now numbers nearly 150 reads, writes, rights, and wrongs. Its earliest documentation, much still preserved on api.slack.com today, often originated as hastily written notes left from one Slack engineer to another, in a kind of institutional shorthand. Still, it was enough to get by for a small team and a growing number of engaged developers."_ Even though we all wish we could do APIs correctly, and support API documentation perfectly from day one, this is never the reality of API operations. OpenAPI will not be a silver bullet for fixing all of this, but it can go a long way in helping standardize what is going on across teams, and within an API community.

Slack focuses on SDK development, Postman client usage, alternative forms of documentation, and mock servers as the primary reasons for publishing the OpenAPI for their API. They also share some of the back story regarding how they crafted the spec, and their decision making process behind why they chose OpenAPI over other specifications. They also share a bit of their road map regarding the API definition, noting that they will be adopting OpenAPI v3.0, providing _"more expressive JSON schema response structures and superior authentication descriptors, specifications for incoming webhooks, interactive messages, slash commands, and the Events API, tighter specification presentation within api.slack.com documentation, and example spec implementation in Slack's own SDKs and tools"_.

I've been covering leading API providers' move towards OpenAPI adoption for some time, writing about [the New York Times publishing their OpenAPI definition to Github](https://apievangelist.com/2017/03/01/new-york-times-manages-their-openapi-using-github/), and [Box doing the same, but providing even more detail behind the how and why of doing OpenAPI](https://apievangelist.com/2017/05/22/box-goes-all-in-on-openapi/). Slack continues this trend, but showcases more of the benefits it brings to the platform, as well as the community. All API providers should be publishing an up to date OpenAPI definition to Github by default, like Slack does. They should also be standardizing their documentation, mock and virtualized implementations, generating SDKs, and driving continuous integration and testing using this OpenAPI, just like Slack does. They should be this vocal about it too, encouraging the community to embrace, and ingest, the OpenAPI across the on-boarding and integration process.

I know some folks are still skeptical about what OpenAPI brings to the table, but increasingly the benefits are outweighing the skepticism–making it hard to ignore OpenAPI. Another thing I want to highlight in this story is that Taylor Singletary ([@episod](https://twitter.com/episod)), reality technician, documentation & developer relations at Slack, brings an honest voice to this OpenAPI tale, which is something that is often missing from the platforms I cover. This is how you make boring ass stories about mundane technical aspects of API operations, like API specifications, something that people will want to read. You tell an honest story that helps folks understand the value being delivered. You make sure that you don't sugar coat things, you talk about the good, as well as some of the gotchas like Taylor has, and you connect with your readers. It isn't rocket science, it is just about caring about what you are doing, and the human beings your platform impacts. When done right you can move things forward in a more meaningful way, beyond what the technology is capable of doing all by itself.


Getting Beyond OpenAPI Being About API Documentation

Darrel Miller has a thought-provoking post on OpenAPI not being what he thought, shining a light on a very important dimension of what OpenAPI does, and doesn’t do in the API space. In my experience, OpenAPI is rarely what people think, and I want to revisit one slice of Darrel’s story, regarding folks generally thinking of OpenAPI (Swagger) as being all about API documentation. In 2017, the majority of folks I talk to think OpenAPI is about documenting your APIs–something that always makes me sad, but I get it, and it is a notion I regularly work to combat.

First, and foremost, OpenAPI is a bridge to understanding and being able to communicate around using HTTP as a transport, and our greatest hope for helping developers learn their HTTPs and 123s. I meet developers on a regular basis who are building web APIs, yet do not have a firm grasp on what HTTP is. Hell, I’ve had a career dedicated to web APIs for the last seven years, and I’m still developing my grasp on what it is, learning new things from folks like Erik Wilde (@dret), Darrel Miller (@darrel_miller), and Mike Amundsen (@mamund) on a regular basis. In the API game you should always be learning, and as a software engineer the web is the center of your existence at the moment, and should be the focus of what you are learning to push your knowledge forward.

Darrel has a great line in his post where he says he has “a higher chance of convincing developers to stop drinking Mountain Dew than to pry a documentation generator from the hands of a dev with a deadline.” Meaning, most developers don’t have the time or interest to learn what OpenAPI is, or what it can do for them in their busy world; they just want help delivering documentation–a very visual representation of the work they’ve done, and something they can demonstrate to their boss, partners, and customers. Most developers aren’t spending the time trying to know and understand everything API, thinking deeply on the subject like Darrel and I are doing. Most don’t even have time to read our blog posts. A sad fact of doing business in the tech space, but it is something those of us in charge of API standards and tooling, or even selling API services, should be aware of.

You see an essence of this with API code generators, and API testing from OpenAPI, although in much smaller quantities than API documentation enjoys. Developers just want the assist; they really don’t care whether it is the right way of doing things or the wrong way, or how it fits into the bigger picture. API developers just want to get their work done, and move on. It is up to us analysts, standards shepherds, and API service providers to help educate, illuminate, and incentivize developers to get over their limiting views on what OpenAPI is, and/or develop the next killer tooling that makes their lives insanely easier, like Swagger UI did for API documentation. We need to learn from the impact this tooling has made, and make sure the other lifecycle solutions we are delivering speak in similar tones.

If you are reading this piece, and are still in the camp of folks who see OpenAPI as Swagger UI, don’t feel bad, it is a common misconception, and one that was exacerbated by the move from Swagger to OpenAPI. My recommendation is that you begin to look at OpenAPI independent of any tooling it enables. Think of it as a checklist for your HTTP learning, sharing, and communication across your API development team. It shouldn’t be just about delivering documentation, code, tests, or anything else. OpenAPI is about making sure you have the HTTP details of your API delivered in a consistent way, across not just a single API, but all the APIs you are delivering. OpenAPI is the bridge from where you are now with your API operations, to where you should be when it comes to the definition, design, deployment, management, and delivery of sustainable contracts around the digital assets you are serving up internally, with partners, and to 3rd party developers. It may seem like extra work to think about it this way, but it is something that will save you time and money down the road.
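To make that a little more concrete, here is a minimal, entirely hypothetical OpenAPI (Swagger 2.0) definition, with every name in it made up for illustration, showing the kind of consistent HTTP details (host, paths, verbs, status codes, and schema) the specification captures for an API:

```yaml
swagger: "2.0"
info:
  title: Example Photo API        # hypothetical API, for illustration only
  version: "1.0.0"
host: api.example.com             # placeholder host
basePath: /v1
schemes:
  - https
paths:
  /photos:
    get:
      summary: List all photos
      produces:
        - application/json
      responses:
        "200":
          description: A list of photo objects
          schema:
            type: array
            items:
              $ref: "#/definitions/Photo"
definitions:
  Photo:
    type: object
    properties:
      id:
        type: string
      title:
        type: string
```

Nothing in there is about documentation, code, or tests. It is simply the HTTP surface area of the API, stated consistently, which is exactly what every downstream tool depends on.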


Automatically Generating OpenAPI From A YAML Dataset Using Jekyll

I was brainstorming with Shelby Switzer (@switzerly) yesterday around potential projects for upcoming events we are attending, looking for interesting ideas we can push forward, and one of the ideas we settled on was automatically generating OpenAPIs from any open data set. We aren’t just looking for some code to do this, we are looking for a forkable, reusable way of doing this that anyone could potentially put to work making open data more accessible. It’s an interesting idea that I think could have legs, would complement some of the existing projects I’m tackling, and would help folks make their open data more usable.

To develop a proof of concept I took one of my existing projects for publishing an API integration page within the developer portal of API providers, and replaced the hand crafted OpenAPI with a dynamic one. The project is driven from a single YAML data file, which I manage and publish using Google Sheets, and it already had a static API and OpenAPI documentation, making it a perfect proof of concept. As I said, the OpenAPI was static YAML, so I got to work making it dynamically driven from the YAML data store. The integrations.yaml data store has eight fields, which I had published as four separate API paths, depending on which category each entry is in. I was able to assemble the OpenAPI using a handful of variables already in the config.yaml for the project, but the rest I was able to generate by mounting the integrations.yaml, dynamically identifying the fields and the field types, and then generating the API paths, and schema definitions needed in the OpenAPI.
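As a rough sketch of the technique, not the actual project code, here is how a Liquid template in Jekyll can emit an OpenAPI definition from a YAML data file. It loops over a hypothetical category field in _data/integrations.yaml to generate one path per category:

```liquid
---
layout: null
---
# rendered by Jekyll at build time; stray blank lines from Liquid tags are harmless in YAML
swagger: "2.0"
info:
  title: "{{ site.title }} API"   # site-wide values come from the project config
  version: "1.0.0"
basePath: /api
paths:
{% assign categories = site.data.integrations | map: "category" | uniq %}
{% for category in categories %}
  /{{ category | slugify }}/:
    get:
      summary: "Return all {{ category }} entries"
      responses:
        "200":
          description: "Every entry in the {{ category }} category"
{% endfor %}
```

Saved as something like openapi.yaml at the root of the project, Jekyll renders this to a plain YAML file on each build, which is what keeps the documentation in sync with whatever is in the data file.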

It’s totally hacky at the moment, and just a proof of concept, but it works. I’m using the dynamically generated OpenAPI to drive the Swagger UI documentation on the project. I’m not sure why I hadn’t thought of this before, but this is why I spend time hanging out with smart folks like Shelby, who ask good questions, and are curious about pushing forward concepts like this. Liquid, the templating language used by Jekyll to deliver HTML in Github-driven projects like this, is very limiting, providing some serious constraints when it comes to delivering tools like this. As I get stronger in my knowledge of it, and push the boundaries of what it can do, I’m able to do some pretty interesting things on top of YAML and JSON data stored on Github, within Jekyll sites like this. It can be pretty hacky, and would make many programmers cringe, but I like it.

While the idea needs a lot more work, it provides an interesting seed for how OpenAPI can be generated from a single (or multiple) open data file in CSV, JSON, or YAML–formats which Jekyll speaks natively. The possibility of committing open data files into a Github repo and having OpenAPI, schema, documentation, and even UI elements automatically generated is pretty huge. This approach holds a significant amount of potential when it comes to making open data more discoverable, accessible, forkable, and reusable–which all open data should be by default. I will keep pushing the idea forward, see where Shelby takes it, and report back here when I have anything more to share.


A New Minimum Viable Documentation (MVD) Jekyll Template For APIs

I am a big fan of Jekyll, the static content management system (CMS). All of API Evangelist runs as hundreds of little Jekyll-driven Github repositories, in a sort of microservices concert, allowing me to orchestrate my research, data, and the stories I tell across all of my projects. I recommend that API providers launch their API portals using Jekyll, whether you choose to run on Github, or anywhere else using this light-weight, portable solution. I have several Jekyll templates I use to fork and turn into new API portals, providing me with a robust toolbox for making APIs more usable.

My friend and collaborator James Higginbotham (@launchany) has launched a new minimum viable documentation (MVD) template for APIs, providing API providers with everything they need out of the gate when it comes to a presence for their API. The MVD solution provides you with a place for your getting started guide, workflows, code samples, and reference material, with OpenAPI as the heartbeat–providing you with everything you need when it comes to API documentation. It is all an open source package available on Github, allowing any API provider to fork it and quickly change the content, and the look and feel, to match their needs--which, in my opinion, is the way ALL API documentation solutions should be. None of us should be re-inventing the wheel when it comes to our API portals; there are too many good examples out there to follow.

I know that Jekyll is intimidating for many folks. I’m currently dealing with this on several fronts, but trust me when I say that Jekyll will become one of the most important tools in your API toolbox. It takes a bit to learn the structure of Jekyll, and to get over some of the quirks of learning to program using Liquid, but once you do, it will open up a whole new world for you. It is much more than just a static content management system (CMS). For me, its most significant strength has become its role as a data management system (DMS), with OpenAPI at the heart. I use Jekyll (and Github) for managing all my OpenAPI definitions, JSON and YAML files, and increasingly I am publishing my data sets this way instead of relying on server-side technology. If you are looking for a new solution when it comes to your API portal, I recommend taking a look at what James is up to.


When Describing Your Machine Learning APIs, Work Extra Hard To Keep Things Simple

I’m spending a significant amount of time learning about machine learning APIs lately. Some of what I’m reading is easy to follow, while most of it is not. A good deal of what I’m reading is technically complex, and more on the documentation side of the conversation. Other stuff I come across is difficult to read, not because it is technical, but because it is algorithmic marketing magic, and doesn’t really get at what is going on (or not) under the hood.

If you are in the business of writing marketing copy, documentation, or even doing the API design itself, please work extra hard to keep things simple and in plain language. I read so much hype, jargon, fluff, and meaningless content about artificial intelligence and machine learning each day that I take pleasure anytime I find simple, concise, and informative descriptions of what ML APIs do. In an exploding world of machine learning hype, your products will stand out if they are straight up, and avoid the BS that will pretty quickly turn savvy folks off of whatever you are peddling.

Really, this advice applies to any API, not just machine learning. It’s just that the quantity of hype we are seeing around AI and ML in 2017 is reaching some pretty extreme levels. Following the hype is easy. Writing fluffy content doesn’t take any skill. Writing simple, concise, plain language names, descriptions, and other metadata for artificial intelligence and machine learning APIs takes time, and a significant amount of contemplation regarding the message you want to be sending. The ML APIs I come across that get right to the point are always the ones that stick around in my mind, and find a place within my research and storytelling.

We are going to continue to see an explosion in the number of algorithmic APIs, delivering across the artificial intelligence, machine learning, deep learning, cognitive, and other magical realms. The APIs that deliver real business value will survive. The ones that have simple, intuitive titles, and concise yet informative descriptions that avoid hype and buzz, will be the ones that get shared and reused, and that ultimately float to the top of the pile and stick around. I’m spending upwards of 5-10 hours a week looking through AI and ML API descriptions, and when I come across something that is clearly bullshit I don’t hesitate to flag it, and push it back to the back warehouses of my research, keeping my time focused on the APIs whose purpose I can easily articulate, and that will also make sense to my readers.

Photo Credit: Bryan Mathers (Machine Learning)


The Effect of Visual Design and Information Content on Readers’ Assessments of API Reference Topics

I have seen a number of research projects looking at API documentation, but this is the most detailed study into how people are seeing, or not seeing, the API documentation and other resources we are providing. It is a dissertation by Robert Bennett Watson, out of the University of Washington, on the Effect of Visual Design and Information Content on Readers’ Assessments of API Reference Topics.

I gave the research paper a read through, and it is some lofty academic stuff, but it touches on a number of the things I write about on API Evangelist when it comes to the cognitive load associated with understanding what an API does. I found the resulting conversation from the research to be the most interesting part, discussing how we can improve the flow of our API documentation and reduce interruption time, or as I often call it, “friction”. There is a wealth of ideas in there for helping us think more critically about our API documentation, which has repeatedly been identified as the number one problem area for developers.

If you are in the business of creating any new API documentation startup, your team should be digesting Mr. Watson’s work. This is the first official academic work I’ve seen on the subject of API documentation, and it is something I’ll be revisiting regularly, attempting to distill down any words of wisdom for my readers. I feel like this work is a sign of larger movements toward the API space getting more coherent in how we approach our API operations. I’m hoping it is something that will lay the groundwork for some more useful API documentation services and tooling.


API Documentation From SDK Bridge

This post is a straight up copy and paste from an email newsletter I get from Peter Gruenbaum of SDK Bridge. I am a big supporter of API service providers like SDK Bridge, which has been doing API documentation for the entire time I’ve been the API Evangelist. Peter isn’t looking to be the next big startup; he’s just operating a successful API service that addresses one of the biggest problems API providers face--documentation. Some of my readers might not be aware these types of services exist, which is why I’m copy / pasting this, and helping spread the good word.


People often ask me what the best tool for API documentation is. There is no simple answer to this question. It depends a lot on what your API looks like, who your developers are, and what kind of support you can give your content system. This is a quick newsletter to pass on a review of free and open source API documentation tools that you might consider using.

Also, there’s a 60% off sale on our Udemy courses for you newsletter readers for the first 10 students to sign up:

  • API Documentation 1: JSON and XML for Technical Writers: $10 (normally $25)
  • API Documentation 2: REST for Technical Writers: $16 (normally $40)
  • API Documentation 3: The Art of API Documentation: $10 (normally $25)
  • Coding for Writers 1: Basic Programming: $18 (normally $45)

  • Peter Gruenbaum, President, SDK Bridge

Free and Open Source API Documentation Tools

Diána Lakatos has written an excellent description of several free and open source tools that can read the standard API definition formats OpenAPI, RAML, and API Blueprint. In addition, she covers API documentation tools that require non-standard formats, as well as general purpose open source documentation tools that you may want to consider for API documentation.

She provides screenshots and links to demos for each of these tools. If you want a quick overview of the tools, scroll to the bottom to read the summary table. You can find the article here: Free and Open Source API Documentation Tools.


OpenAPI-Driven Documentation For Your API With ReDoc

ReDoc is the responsive, three-panel, OpenAPI-driven documentation for your API that you have been looking for. Swagger UI is still the reigning king when it comes to API documentation generated using the OpenAPI Spec, but ReDoc provides a simple, attractive, and clean alternative.

ReDoc is deployable to any web page with just two tags, with the resulting documentation looking attractive on both web and mobile devices. Now you can have it all: API documentation that looks good, is interactive, and is driven by a machine-readable definition that will help you keep everything up to date.

All you need to fire up ReDoc is two lines of HTML on your web page, along these lines (swap the placeholder spec URL for your own OpenAPI):
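```html
<!-- the <redoc> tag points at your OpenAPI definition (placeholder URL) -->
<redoc spec-url="https://example.com/openapi.yaml"></redoc>
<!-- the ReDoc script itself, loaded from the project's CDN -->
<script src="https://rebilly.github.io/ReDoc/releases/latest/redoc.min.js"></script>
```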

The quickest way to deploy ReDoc is using the CDN script shown above, but they also provide Bower and npm options, if that is your preference. There is also a Yeoman generator to help you share the OpenAPIs that are central to your web application's operation, something we will write about in future posts here on the blog.

ReDoc leverages a custom HTML tag, and provides you with a handful of attributes for defining and customizing your documentation, including spec-url, scroll-y-offset, suppress-warnings, lazy-rendering, hide-hostname, and expand-responses--providing some quick ways to get exactly what you need, on any web page.
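As a quick sketch of what that customization looks like, using the same placeholder spec URL as above, a tag along these lines turns on lazy rendering and pre-expands the success responses:

```html
<!-- lazy-rendering defers drawing each section until it scrolls into view;
     expand-responses opens the listed status codes by default -->
<redoc spec-url="https://example.com/openapi.yaml"
       lazy-rendering
       expand-responses="200,201"></redoc>
```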

A handful of APIs have already put ReDoc to use as the documentation for their platforms.

They also provide a live demo of ReDoc, allowing you to kick the tires some more before you deploy, and make sure it does what you will need it to do before you fork.

ReDoc provides a simple, OpenAPI-compliant way of delivering attractive, interactive, responsive, and up to date documentation that can be deployed anywhere, including integration into your existing continuous integration, and API lifecycle. ReDoc reflects a new generation of very modular, plug and play API tooling that can be put to use immediately as part of an OpenAPI-driven web, mobile, and device application development cycle.

ReDoc is available on Github: https://github.com/Rebilly/ReDoc, as an open source solution brought to you by Rebilly, “the world's first subscription and recurring profit maximization company”.


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show; I miss quite a bit, and depend on my network to help me know what is going on.