
API Documentation News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing their APIs, going beyond just monitoring, and understanding the details of each request and response.

Providing Minimum Viable API Documentation Blueprints To Help Guide Your API Developers

I was taking a look at the Department of Veterans Affairs (VA) API documentation for the VA Facilities API, intending to provide some feedback on the API implementation. The API itself is pretty sound, and I don’t have any feedback without having actually integrated it into an application, but following on the heels of my previous story about how we get API developers to follow minimum viable API documentation guidance, I had lots of feedback on the overall delivery of the documentation for the VA Facilities API, helping improve on what they have there.

Provide A Working Example of Minimum Viable API Documentation
One of the ways that you help incentivize your API developers to deliver minimum viable API documentation across their API implementations is to do as much of the work for them as you can, and provide them with a forkable, downloadable, clonable API documentation blueprint that meets the minimum viable requirements. To help illustrate what I’m talking about, I created a base GitHub blueprint for what I’d suggest as minimum viable API documentation at the VA. Providing something the VA can consider, and borrow from, as they are developing their own strategy for ensuring all APIs are consistently documented.

Covering The Bare Essentials That Should Exist For All APIs
I wanted to make sure each API had the bare essentials, so I took what the VA has already done over at developer.va.gov, and republished it as a static single page application that runs 100% on GitHub Pages, hosted in a GitHub repository–providing the following essential building blocks for APIs at the VA:

  • Landing Page - Giving any API a single landing page that contains everything you need to know about working with an API. The landing page can be hosted as its own repo and subdomain, and then linked up with other APIs using a facade page, or it could be published with many other APIs in a single repository.
  • Interactive Documentation - Providing interactive, OpenAPI-driven API documentation using Swagger UI. Providing a usable, and up to date version of the documentation that developers can use to understand what the API does.
  • OpenAPI Definition - Making sure the OpenAPI behind the documentation is front and center, and easily downloaded for use in other tools and services.
  • Postman Collection - Providing a Postman Collection for the API, and offering it as more of a transactional alternative to the OpenAPI.

That covers the bases for the documentation that EVERY API should have. Making API documentation available at a single URL, with a human viewable landing page, complete with documentation. While also making sure that there are two machine readable API definitions available for an API, allowing the API documentation to be more portable, and usable in other tooling and services–letting developers use the API definitions as part of other stops along the API lifecycle.

Bringing In Some Other Essential API Documentation Elements
Beyond the landing page, interactive documentation, OpenAPI, and Postman Collection, I wanted to suggest some other building blocks that would really make sure API developers at the VA are properly documenting, communicating, as well as supporting their APIs. To go beyond the bare bones API documentation, I wanted to suggest a handful of other elements, as well as incorporate some building blocks the VA already had on the API documentation landing page for the VA Facilities API.

  • Authentication - Providing an overview of authenticating with the API using the header apikey.
  • Response Formats - They already had a listing of media types available for the API.
  • Support - Ensuring that an API has at least one support channel, if not multiple channels.
  • Road Map - Making sure there is a road map providing insights into what is planned for an API.
  • References - They already had a listing of references, which I expanded upon here.

I tried not to go to town adding all the building blocks I consider to be essential, and just contributed a couple of other basic items. I feel support and road map are essential, cannot be ignored, and should always be part of the minimum viable API documentation requirements. My biggest frustrations with APIs are 1) out of date documentation, 2) no support, and 3) not knowing what the future holds. I’d say that I’m also increasingly frustrated when I can’t get at the OpenAPI for an API, or at least find a Postman Collection for the API. Machine readable definitions moved into the essential category for me a couple years ago–even though I know some folks don’t feel the same.

A Self Contained API Documentation Blueprint For Reuse
To create the minimum viable API documentation blueprint demo for the VA, I took the HTML template from developer.va.gov, and deployed it as a static Jekyll website that runs on GitHub Pages. The landing page for the documentation is a single index.html page in the root of the site, leveraging Jekyll for the user interface, but driving all the content on the page from the central config.yml for the API project. Providing a YAML checklist that API developers can follow when publishing their own documentation, helping do a lot of the heavy lifting for developers. All they have to do is update the OpenAPI for the API, and add their own data and content to the config.yml, to update the landing page for the API. Providing a self-contained set of API documentation that developers can fork, download, and reuse as part of their work, delivering consistent API documentation across teams.
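
To give a feel for what that YAML checklist might look like, here is a minimal sketch of the kind of config.yml that could drive such a landing page–the field names are illustrative, and not the actual schema of the VA blueprint:

    # config.yml - illustrative fields only, the blueprint's actual schema may differ
    title: Example API
    description: A concise description of what the API does.
    base_url: https://api.example.com/v1
    openapi_url: /openapi.json
    postman_url: /example.postman_collection.json
    authentication:
      type: apikey
      location: header
    support:
      - channel: email
        contact: api-support@example.com
    road_map:
      - "Add OAuth 2.0 support"

Jekyll loops over these values in index.html, so updating the landing page is just a matter of editing the YAML.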

The demo API documentation blueprint could use some more polishing and comments. I will keep adding to it, and evolving it as I have time. I just wanted to share more of my thoughts about the approach the VA could take to provide functional API documentation guidance, as a functional demo. Providing them with something they could fork, evolve, and polish on their own, turning it into a more solid, viable solution for documentation at the federal agency. Helping evolve how they deliver API documentation across the agency, and ensuring that they can properly scale the delivery of APIs across teams and vendors. While also helping maximize how they leverage GitHub as part of their API lifecycle, setting the base for API documentation in a way that ensures it can also be used as part of a build pipeline to deploy APIs, as well as manage, test, and secure them, helping deliver along almost every stop along a modern API lifecycle.

The website for this project is available at: https://va-working.github.io/api-documentation/
You can access the GitHub repository at: https://github.com/va-working/api-documentation


Please Refer The Engineer From Your API Team To This Story

I reach out to API providers on a regular basis, asking them if they have an OpenAPI or Postman Collection available behind the scenes. I am adding these machine readable API definitions to my index of APIs that I monitor, while also publishing them out to my API Stack research, the API Gallery, and APIs.io, working to get them published in the Postman Network, and syndicating them as part of my wider work as an OpenAPI member. However, even beyond my own personal needs for API providers to have a machine readable definition of their API, and helping them get more syndication and exposure for their API, having a definition present significantly reduces friction when on-boarding with their APIs, at almost every stop along a developer’s API integration journey.

One of the API providers I reached out to recently responded with this: “I spoke with one of our engineers and he asked me to refer you to https://developer.[company].com/”. Ok. First, I spent over 30 minutes there just the other day. Learning about what you do, reading through documentation, and thinking about what was possible–which I referenced in my email. At this point I’m guessing that the engineer in question doesn’t know what an OpenAPI or Postman Collection is, they do not understand the impact these specifications are having on the wider API ecosystem, and lastly, I’m guessing they don’t have any idea who I am (ego taking control). All of which provides me with the signals I need to make an assessment of where any API is in their overall journey. Demonstrating to me that they have a long way to go when it comes to understanding the wider API landscape in which they are operating, and that they are too busy to really come out of their engineering box and help their API consumers truly be successful in integrating with their platform.

I see this a lot. It isn’t that I expect everyone to understand what OpenAPI and Postman Collections are, or even know who I am. However, I do expect people doing APIs to come out of their boxes a little bit, and be willing to maybe Google a topic before responding to a question, or maybe Google the name of the person they are responding to. I don’t use a gmail.com address to communicate, I am using apievangelist.com, and if you are using a solution like Clearbit, or another business intelligence solution, you should always be retrieving some basic details about who you are communicating with, before you ever respond. That is, you do all of this kind of stuff if you are truly serious about operating your API, helping your API consumers be more successful, and taking the time to provide them with the resources they need along the way–things like an OpenAPI, or Postman Collections.

Ok, so why was this response so inadequate?

  • No API Team Present - It shows me that your company doesn’t have any humans there to support the humans that will be using your API. My email went from general support, to a backend engineer who doesn’t care about who I am, or about what I need. This is a sign of what the future will hold if I actually bake their API into my applications–I don’t need my questions lost between support and engineering, with no dedicated API team to talk to.
  • No Business Intelligence - It shows me that your company has put zero thought into the API business model, on-boarding, and support process. Which means you do not have a feedback loop established for your platform, and your API will always be deficient of the nutrients it needs to grow. Always make sure you conduct a lookup based upon the domain or Twitter handle of your consumers, to get the context you need to understand who you are talking to.
  • Stuck In Your Bubble - You aren’t aware of the wider API community, and the impact OpenAPI and Postman are having on on-boarding, documentation, and other stops along the API lifecycle. Which means you probably aren’t going to keep your platform evolving with where things are headed.

Ok, so why should you have an OpenAPI and Postman Collection?

  • Reduce Onboarding Friction - As a developer, I won’t always have the time to spend absorbing your documentation. Let me import your OpenAPI or Postman Collection into my client tooling of choice, register for a key, and begin making API calls in seconds, or minutes. Make learning about your API a hands-on experience, something I’m not going to get from your static documentation.
  • Interactive API Documentation - Having a machine readable definition for your API allows you to easily keep your documentation up to date, and make it a more interactive experience. Rather than just reading your API documentation, I should be able to make calls, see responses, errors, and other elements I will need to truly understand what you do. There are plenty of open source interactive API documentation solutions that are driven by OpenAPI and Postman, but you’d know this if you were aware of the wider landscape.
  • Generate SDKs, and Other Code - Please do not make me hand code the integration with each of your API endpoints, crafting each request and response manually. Allow me to autogenerate the most mundane aspects of integration, allowing the OpenAPI and Postman Collection to act as the integration contract (see the example after this list).
  • Discovery - Please don’t expect your potential consumers to always know about your company, and regularly return to your developer.[company].com portal. Please make your APIs portable so that they can be published in any directory, catalog, gallery, marketplace, and platform that I’m already using, and frequent as part of my daily activities. If you are in my Postman Client, I’m more likely to remember that you exist in my busy world.
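
To make the SDK generation point concrete, here is one way a consumer could put a published OpenAPI definition to work, using the open source OpenAPI Generator CLI–the input URL and output path here are hypothetical:

    # Generate a PHP client SDK from a provider's published OpenAPI definition
    openapi-generator-cli generate \
      -i https://developer.example.com/openapi.json \
      -g php \
      -o ./example-sdk

One command, and the most mundane request and response plumbing is written for you, directly from the provider's API contract.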

These are just a few of the basics of why this type of response to my question was inadequate, and why you’d want to have OpenAPI and Postman Collections available. My experience on-boarding will be similar to that of other developers, it just happens that the applications I’m developing are outside the normal range of web and mobile applications you have probably been thinking about when publishing your API. But this is why we do APIs, to reach the long tail of users, and encourage innovation around our platforms. I just stepped up and gave 30 minutes of my time (now 60 minutes with this story) to learning about your platform, and pointing me to your developer.[company].com page was all you could muster in return?

Just like other developers will, if I can’t onboard with your API without friction, and I can’t tell if there is anyone home who is willing to give me the time of day when I have questions, I’m going to move on. There are other platforms that will accommodate me. The other downside of your response, and me moving on to another platform, is that now I’m not going to write about your API on my blog. Oh well? After eight years of blogging on APIs, and getting 5-10K page views per day, I can write about a topic or industry, and usually dominate the SEO landscape for that API search term(s) (ego still has control). But…I am moving on, no story to be told here. The best part of my job is there are always stories to be told somewhere else, and I get to just move on, and avoid the friction wherever possible when learning how to put APIs to work.

I just needed this single link to provide in response to their email, before I moved on!


Provide Your API Developers With A Forkable Example of API Documentation In Action

The other day I responded to a question about how teams should be documenting their APIs when they have both legacy and new APIs. I wanted to keep the conversation thread going with an example of one possible API documentation implementation. The best way to deliver API documentation guidance in any organization is to provide a forkable, downloadable example of whatever you are talking about. To help illustrate what I am talking about, I wanted to take one documentation solution, and publish it as a GitHub repository.

I chose to go with a simple OpenAPI 3.0 defined API contract, driving Swagger UI API documentation, hosted using GitHub Pages, and managed as a GitHub repository. In my story about how teams should be documenting their APIs, I provided several API definition formats, and API documentation options–for this walk-through I wanted to narrow it down to a single combination, providing the minimum(alist) viable option possible using OpenAPI 3.0 and Swagger UI. Of course, any federal agency implementing such a solution should wrap the documentation with their own branding, similar to the City Pairs API prototype out of GSA, which originated over at CFPB.

I used the VA Facilities API definition from the developer.va.gov portal for this sample. Mostly because it was ready to go, and relevant to the VA efforts, but also because it was using OpenAPI 3.0–I think it is worth making sure all API documentation moving forward is supporting the latest version of OpenAPI. The API documentation is here, the OpenAPI definition is here, and the Github repository is here, showing what is possible. There are plenty of other things I’d like to see in a baseline API documentation template, but this provides a good first draft for a true minimum viable definition.

The goal with this project is to provide a basic seed that any team could use. Next, I will add in some other building blocks, and implement ReDoc, DapperDox, and WSDLDoc versions. Providing four separate documentation examples that developers can fork and use to document the APIs they are working on. In my opinion, one or more API documentation templates like this should be available for teams to fork or download and implement within any organization. All API governance guidance like this should have the text describing the policy, as well as one or many examples of the policy being delivered. Hopefully this project shows an example of this in action.


How Do We Get API Developers To Follow The Minimum Viable API Documentation Guidance?

After providing some guidance the other day on how teams should be documenting their APIs, one of the follow up comments was: “Now we just have to figure out how to get the developers to follow the guidance!” Something that any API leadership and governance team is going to face as they work to implement new policies across their organization. You can craft the best possible guidance for API design, deployment, management, and documentation, but it doesn’t mean anyone is actually going to follow your guidance.

Moving forward, API governance within any organization represents the cultural frontline of API operations. Getting teams to learn about, understand, and implement sensible API practices is always easier said than done. You may think your vision of the organization’s API future is the right one, but getting other internal groups to buy into that vision will take a significant amount of work. It is something that will take time and resources, and it will always be shifting and evolving over time.

Lead By Example
The best way to get developers to follow the minimum viable API documentation guidance being set forth is to do the work for them. Provide templates and blueprints of what you want them to do. Develop, provide, and evolve forkable and downloadable API documentation examples, with simple README checklists of what is expected of them. I’ve published a simple example using the VA Facilities API definition, delivered as OpenAPI 3.0 and Swagger UI on Github Pages, with the entire thing forkable via the Github repository. It is a very bare bones example of providing API documentation guidance in a package that can be reused, providing API developers with a working example of what is expected of them.

Make It A Group Effort
To help get API developers on board with the minimum viable API documentation guidance being set forth, I recommend making it a group effort. Recruit help from developers to improve upon the API documentation templates provided, and encourage them to extend, evolve, and push forward their individual API documentation implementations. Give API developers a stake in how you define governance for API documentation–not everyone will be up for the task, but you’d be surprised who will raise their hand to contribute if they are asked.

Provide An Incentive Model
This is something that will vary in effectiveness from organization to organization, but consider offering a reward, benefit, perk, or some other incentive to any group who adopts the API documentation guidance. Provide them with praise, and showcase their work. Bring exposure to their work with leadership, and across other groups. Brainstorm creative ways of incentivizing development groups to get more involved. Establish a list of all development groups, track ideas for incentivizing their participation and adoption, and work regularly to close them on playing an active role in moving forward your organization’s API documentation strategy.

Punish And Shame Others
As a last resort, for the more badly behaved groups within our organizations, consider punishing and shaming them for not following API documentation guidance, and not contributing to overall API governance efforts. This is definitely not something you should consider doing lightly, and it should only be used in special cases, but sometimes teams will need smaller or larger punitive responses to their inaction. Ideally, teams are influenced by praise, and positive examples of why API documentation standards matter, but there will always be situations where teams won’t get on board with the wider organizational API governance efforts, and need their knuckles rapped.

Making Meaningful Change Is Hard
It will not be easy to implement consistent API documentation across any large organization. However, API documentation is often one of the most important stops along the API lifecycle, and should receive significant investment when it comes to API governance efforts. In most situations, doing the work for developers, and providing them with blueprints to be successful, will accomplish the goal of getting API developers all using a common approach to API documentation. Like any other stop along the API lifecycle, delivering consistent API documentation across distributed teams will take having a coherent strategy, with regular tactical investment to move everything forward in a meaningful way. However, once you get your API documentation house in order, many other stops along the API lifecycle will also begin to fall in line.


How Should Teams Be Documenting Their APIs When You Have Both Legacy And New APIs?

I’m continuing my work to help the Department of Veterans Affairs (VA) move forward their API strategy. One area I’m happy to help the federal agency with is just being available to answer questions, which I also find makes for great stories here on the blog–helping other federal agencies also learn along the way. One question I got from the agency recently is regarding how teams should be documenting their APIs, taking into consideration that many of them are supporting legacy services like SOAP.

From my vantage point, minimum viable API documentation should always include a machine readable definition, and some autogenerated documentation within a portal at a known location. If it is a SOAP service, WSDL is the format. If it is REST, OpenAPI (fka Swagger) is the format. If it is XML-RPC, you can bend OpenAPI to work. If it is GraphQL, it should come with its own definitions. All of these machine readable definitions should exist within a known location, and be used as the central definition for the documentation user interface. Documentation should not be hand generated anymore, with the wealth of open source API documentation tooling available.

Each service should have its own GitHub/BitBucket/GitLab repository with the following (one possible layout is sketched after this list):

  • README - Providing a concise title and description for the service, as well as links to all documentation, definitions, and other resources.
  • Definitions - Machine readable API definitions for the API's underlying schema, and the surface area of the API.
  • Documentation - Autogenerated documentation for the API, driven by its machine readable definition.
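
As a hedged sketch, assuming OpenAPI as the definition format, one possible layout for such a repository would be:

    facilities-service/
      README.md           <- title, description, and links to everything below
      openapi.yaml        <- machine readable definition for the API surface area
      schema/
        facility.json     <- JSON Schema for the underlying objects
      docs/
        index.html        <- documentation auto-generated from openapi.yaml

The file names are illustrative–what matters is that the README, definitions, and documentation all travel together in a single repository.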

Depending on the type of API being deployed and managed, there should be one or more of these definition formats in place:

  • Web Services Description Language (WSDL) - The XML-based interface definition used for describing the functionality offered by the service.
  • OpenAPI - The YAML or JSON based OpenAPI specification format managed by the OpenAPI Initiative as part of the Linux Foundation.
  • JSON Schema - The vocabulary that allows for the annotation and validation of the schema for the service being offered–it is part of the OpenAPI specification as well.
  • Postman Collections - The JSON based specification format created and maintained by Postman, the API client and development environment.
  • API Blueprint - The markdown based API specification format created and maintained by the Apiary API design environment, now owned by Oracle.
  • RAML - The YAML based API specification format created and maintained by Mulesoft.

Ideally, OpenAPI / JSON Schema is established as the primary format for defining the contract for each API, but teams should also be able to stick with what they were given (legacy), run with the tools they’ve already purchased (RAML & API Blueprint), and convert between specifications using API Transformer.

API documentation should be published to its GitHub/GitLab/BitBucket repository, and hosted using one of the services' static project site solutions, with one of the following open source documentation solutions:

  • Swagger UI - Open source API documentation driven by OpenAPI.
  • ReDoc - Open source API documentation driven by OpenAPI.
  • RAML - Open source API documentation driven by RAML.
  • DapperDox - Open source API documentation that provides rich, out-of-the-box rendering of your OpenAPI specifications, seamlessly combined with your GitHub flavored Markdown documentation, guides, and diagrams.

There are other open source solutions available for auto-generating API documentation using the core API’s definition, but these represent the leading solutions out there. Depending on the solution being used to deploy or manage an API, there might be built-in, ready to go options for deploying documentation based upon the OpenAPI, WSDL, RAML, or other definition, using AWS API Gateway, Mulesoft, or another vendor solution already in place to support API operations.

Even with all this effort, a repository, with a machine readable API definition, and autogenerated documentation, still doesn’t provide enough of a baseline for API teams to follow. Each API's documentation should possess the following within those building blocks (a minimal OpenAPI sketch covering them follows the list):

  • Title and Description - Provide the concise description of what an API does from the README, and make sure it is baked into the API's definition.
  • Base URL - Have the base URL, or variable representation for a base URL present in API definitions.
  • Base Path - Provide any base path that is constant across paths available for any single API.
  • Content Types - List what content types an API accepts and returns as part of its operations.
  • Paths - List all available paths for an API, with summary and descriptions, making sure the entire surface area of an API is documented.
  • Parameters - Provide details on the header, path, and query parameters used for each API path being documented.
  • Body - Provide details on the schema for the body of each API path that accepts a body as part of its operations.
  • Responses - Provide HTTP status codes and a reference to the schema being returned for each path.
  • Examples - Provide example requests and responses for each API path being documented.
  • Schema - Document all schema being used as part of requests and responses for all APIs paths being documented.
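
To make the checklist concrete, here is a minimal sketch of how these elements map to an OpenAPI 3.0 document, using a hypothetical facilities API–none of this reflects any actual agency definition:

    openapi: 3.0.0
    info:
      title: Example Facilities API
      description: A concise description of what the API does, matching the README.
      version: 1.0.0
    servers:
      - url: https://api.example.com/v1    # base URL and base path
    paths:
      /facilities:
        get:
          summary: List facilities
          description: Returns a list of facilities, filterable by state.
          parameters:
            - name: state
              in: query
              description: Two letter state code to filter by.
              schema:
                type: string
          responses:
            '200':
              description: A list of facilities.
              content:
                application/json:            # content type returned
                  schema:
                    type: array
                    items:
                      $ref: '#/components/schemas/Facility'
                  example:
                    - id: "facility-123"
                      name: "Example Facility"
    components:
      schemas:
        Facility:
          type: object
          properties:
            id:
              type: string
            name:
              type: string

Every element in the checklist above maps to a specific place in the definition, which is what makes the documentation auto-generatable in the first place.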

If EVERY API possesses its own repository, and a README to get going, guiding all API consumers to complete, up to date, and informative documentation that is auto-generated, a significant amount of friction during the on-boarding process can be eliminated. Additionally, friction at the time of hand-off for any service, from one team to another, or one vendor to another, will be significantly reduced–with all relevant documentation available within the project’s repository.

API documentation delivered in this way provides a single known location for any human to go when putting an API to work. It also provides a single known location to find a machine readable definition that can be used to on-board using an API client like Postman, PAW, or Insomnia. The API definition provides the contract for the API documentation, but it also provides what is needed across other stops along the API lifecycle, like monitoring, testing, SDK generation, security, and client integration–reducing the friction across many stops along the API journey.

This should provide a baseline for API documentation across teams. No matter how big or small the API, or how new or old the API is, each API should have documentation available in a consistent, and usable way. Providing a human and programmatic way of understanding what an API does, that can be used to on-board and maintain integrations with each application. The days of PDF and static API documentation are over, and the baseline for each API's documentation always involves having a machine readable contract as the core, and managing the documentation as part of the pipeline used to deploy and manage the rest of the API lifecycle.


Any Way You Want It: Extending Swagger UI for Fun and Profit by Kyle Shockey (@kyshoc) of SmartBear Software (@SmartBear) At @APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Any Way You Want It: Extending Swagger UI for Fun and Profit” by Kyle Shockey (@kyshoc) of SmartBear Software (@SmartBear) on September 25th.

Here is Kyle’s abstract for the session:

Your APIs are tailored to your needs - shouldn’t your tools be as well? In this talk, we’ll explore how Swagger UI 3 makes it easier than ever to create custom functionality, and common use cases for the power that the UI’s plugin system provides.

Learn how to:

  • Create plugins that extend existing features and define new functionality
  • Integrate Swagger UI seamlessly by defining a custom layout
  • Package and share plugins that can be reused by the community (or your organization)

Swagger UI has changed the conversation around how we document our APIs, and being able to extend the interface is an important part of keeping the API documentation conversation evolving, and APIStrat is where this type of discussion is happening. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


It Isn't Just That You Have A PDF For Your API Docs, It Is Because It Demonstrates That You Do Not Use Other APIs

I look at a lot of APIs. I can tell a lot about a company, and the people behind an API, from looking at their developer portal, documentation, and other building blocks of their presence. One of the more egregious sins I feel an API provider can make when operating their API is publishing their API documentation as a PDF. This is something that was acceptable up until about 2006, but over a decade later it shows that the organization behind an API hasn’t done their homework.

The crime really isn’t the fact that an API provider is using a PDF for their documentation. I’m fine with API providers publishing a PDF version of their API documentation, to provide a portable version of it. Where a PDF version of the documentation becomes a problem is when it is the primary version of the documentation, which demonstrates that the creators don’t get out much, and haven’t used many other APIs. If an API team has done their homework, and actually put other 3rd party APIs to work, they would know that PDF documentation for APIs is not the norm out in the real world.

One of the strongest characteristics an API provider can possess is an awareness of what other API providers are doing. The leading API providers demonstrate that they’ve used other APIs, and are aware of what mainstream API consumers are used to. Most mainstream API consumers will simply close the tab when they encounter an API that has a PDF document for their API. Unless you have some sort of mandate to use that particular API, you are going to look elsewhere. If an API provider isn’t up to speed on what the norms are for API documentation, and isn’t outwardly facing, the chances they’ll actively support their API are diminished.

PDF API documentation may not seem like too big of a mistake to many enterprise, institutional, and government API providers, but it demonstrates much more than just a static representation of what an API can do. It represents an isolated, self-contained, non-interactive view of what an API can do. It reflects an API platform that is self-centered, and not really concerned with the outside world. Which often means it is an API platform that won’t always care about you, the API consumer. APIs in the age of the web are all about having an externalized view of the world, and understanding how to play nicely with large groups of developers outside of your firewall–when you publish a PDF version of your API docs, you demonstrate that you don’t get out much, and aren’t concerned with the outside world.


Avoid Being Captain Obvious When Documenting Your API

I read a lot of API documentation, and help review API portals for clients, and one of the most common rookie mistakes I see made is people pointing out the obvious, and writing a bunch of fluffy, meaningless content that gets in the way of people actually using an API. When the obvious API industry stuff is combined with the assumed elements of what a company does, you end up with a meaningless set of obstacles that slows API integration down. Here is the most common thing I read when entering an API portal:

“This is an API for querying data from the [Company X] platform, to get access to JSON from our system which allows you to get data from our system into yours using the web. You will need to write code to make calls to our APIs documented here on the page below. Our API uses REST to accept request and provide responses in a JSON format.”

I’ve read API after API that never tells you what the API does. It just assumes you know what the company does, and then goes into verbose explanations of API, REST, JSON, and other things that should be intuitive if an API is well designed, and immediately accessible via an API. People tend to make too many assumptions about API consumers already knowing what a company does, while also assuming they know absolutely nothing about APIs, burying actual API documentation behind a bunch of API blah blah blah, instead of just doing and being the API.

It is another side effect of developer, database, and IT folks not being very good at thinking outside of their bubble. It goes beyond techies not having social skills, and is more about them not having to think about other people at all. They just don’t have the ability to put themselves in the shoes of someone landing on the home page of their developer portal, not knowing anything about the company or the API, and asking themselves, “what does this person need?”. Which I get being something developers don’t think about with internal APIs, but publishing an API publicly, and not stepping back to think about what someone is going to need, isn’t acceptable.

Even with my experience, I still struggle to say exactly what needs to be said. There is no perfect introduction to a complex, often abstract set of APIs. However, you can invest a little more time thinking about what others will be needing, and maybe run your portal by some external people for a little coherence testing. Most of all, just try to avoid being captain obvious, or captain assumption, writing content that states the obvious while leaving out most of the critical details you take for granted. It really is one of the most important lessons we can take away from providing APIs, the ability for them to push us out of our boxes, from behind our firewalls, and force us to engage with the real world.


For Every Competitor You Keep Out Of Your API Docs You Are Keeping Twenty New Customers Out

It is interesting for me to still regularly come across so many API providers who have a public API portal, but insist on keeping most of their documentation behind a login. Stating that they are concerned with competitors getting access to the design of their API and the underlying schema. Revealing some indefensible API business models, and general paranoia around doing business on the web. Something that usually is a sign for me of a business that is working really hard to maintain a competitive grip within an industry, without actually having to do the hard work of innovating and moving the conversation forward.

Confident API providers know that you can put your API documentation out in the open, complete with schema, without giving away the farm. If your competition can take your API design, and underlying schema, and recreate your business–you should probably go back to the drawing board, and come up with a new business idea. Your API and schema definition is not your business. I’ve used this comparison many times–your API docs are like a restaurant menu. Can you imagine restaurants that kept them hidden until they were sure you were going to be a customer? If you think that your competition can read your menu and recreate all your dishes, then you won’t be in business very long, because your dishes probably weren’t that special to begin with.

For every competitor you keep out of your API documentation, you are keeping twenty new customers out as well. I’m guessing that your savvy competitors are going to be able to get in anyway, with a fake account, or otherwise. Don’t waste your time on hiding your API and keeping it out of the view of your potential customers–invest your energy in making sure your APIs kick ass. To use the restaurant analogy again, make sure your ingredients are the best, and your processes, and your service, are top notch. Don’t make your menu hard to get, it just shows how out of touch you are with the mainstream world of APIs, and your worst fears will come true–someone will come along and do what you do, but even better, and you will become irrelevant.

Be proud of your APIs, and publish them prominently in your API portal. Make sure you have an OpenAPI definition handy, driving your documentation, tests, monitors, and other elements of your operations. Also make sure you have Postman Collections available, allowing your API definition to be portable and importable into the Postman client, allowing consumers to get up and running making calls in minutes, not hours or days. Get out of the way of your API consumers, and don’t put up unnecessary, outdated obstacles in their way. I know that you feel you know best because you’ve been doing this for so long, and know your industry, but the world is moving on, and APIs are about doing business on the web in a much more open, accessible, and self-service way. If you aren’t moving in this direction, I’m guessing you won’t be doing what you do for much longer, because someone will come along who can move faster and be more open.


Adding A Lead To SalesForce Using The REST API

I spend a lot of time talking about the SalesForce API, using it as a reference for where the API evolution began 18 years ago, but it has been a long time since I’ve actually worked with the SalesForce API. Getting up and running with any API, especially iconic APIs that we all should be familiar with, is always an enlightening experience for me. Going from zero, to understanding what is going on, to actually achieving the API call(s) you want, is really what this game is all about.

As part of some work I’m doing with Streamdata.io, I needed to be able to add new leads into SalesForce, and I thought it would be a good time for me to get back into the saddle with the SalesForce REST API–so I volunteered to tackle the integration. The SalesForce API wasn’t as easy to get up and running with as many of the simpler APIs I onboard with, as the API docs aren’t as modern as I’d expect, and what you need is buried behind multiple clicks. Once you find what you are looking for, and click numerous times, you begin to get a feel for what is going on, and the object model in use becomes a little more accessible.

In addition to finding what you need with the SalesForce REST API, you have to make sure you have a handle on the object structure and nuance of SalesForce itself. For this story, I am just working with one object–Leads. I’m using PHP to work with the API, and to begin I wanted to be able to get leads, so I could see which leads I currently have in the system:
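
The original script was embedded in the post, but a minimal sketch of this kind of call looks something like the following, using PHP's curl extension–the instance URL and access token are placeholders you get back from the OAuth flow:

    <?php
    // Placeholders, returned by the SalesForce OAuth flow
    $instance = "https://yourInstance.salesforce.com";
    $token = "YOUR_OAUTH_ACCESS_TOKEN";

    // Query the most recent leads using a SOQL query
    $soql = urlencode("SELECT Id, FirstName, LastName, Company FROM Lead ORDER BY CreatedDate DESC LIMIT 25");

    $ch = curl_init("$instance/services/data/v43.0/query/?q=$soql");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, ["Authorization: Bearer $token"]);
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);

    foreach ($response["records"] as $lead) {
        echo $lead["Id"] . " - " . $lead["Company"] . "\n";
    }
    ?>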

I will add pagination, and other elements in the future. For now, I just wanted to be able to get the latest leads I have in the system, to help with some checks on what is being added. Now that I can check to see what leads are in the system, I wanted to be able to add a lead, with the following script:
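
Again, as a hedged sketch of what that script does, a lead is just a JSON object POSTed to the Lead sobject endpoint–reusing the $instance and $token placeholders from above, with obviously placeholder field values:

    <?php
    $lead = [
        "FirstName" => "Jane",
        "LastName"  => "Doe",
        "Company"   => "Example, Inc.",
        "Email"     => "jane.doe@example.com"
    ];

    $ch = curl_init("$instance/services/data/v43.0/sobjects/Lead/");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($lead));
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        "Authorization: Bearer $token",
        "Content-Type: application/json"
    ]);
    // A successful create returns {"id":"...","success":true,"errors":[]}
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);

    echo "New lead id: " . $response["id"] . "\n";
    ?>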

I am only displaying some of the default fields available for this example, and you can add other custom fields based upon which values you wish to add. Once I have added my lead, I wanted to be able to update it with a PATCH API call:
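
One more sketch along the same lines–updates go to the individual lead's URL, using the id returned when the lead was created, and a successful PATCH comes back as a 204 No Content:

    <?php
    $leadId = "00Q...";  // placeholder, the id returned when the lead was created
    $changes = ["Status" => "Working - Contacted"];  // assuming a default lead status value

    $ch = curl_init("$instance/services/data/v43.0/sobjects/Lead/$leadId");
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PATCH");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($changes));
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        "Authorization: Bearer $token",
        "Content-Type: application/json"
    ]);
    curl_exec($ch);
    curl_close($ch);
    ?>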

Now I am able to add, update, and get any leads I’m working with via the SalesForce API. The project gave me a good refresher on what is possible with the SalesForce API. The API is extremely powerful, and something I want to be up to speed on, so that I can intelligently respond to questions I get. I wish the SalesForce API team would spend some time modernizing their API portal and documentation, providing a more coherent separation between the different flavors of their API, and providing OpenAPI driven documentation, as well as Postman Collections. It would have saved me hours of working through their API docs, and playing around with different API calls in Postman, before I was able to successfully OAuth, and make my first call against the accounts and leads API endpoints.

While I think SalesForce remains a worthwhile API to showcase when I talk about the history of APIs, and the power of providing web APIs, their overall documentation and approach is beginning to fall behind the times. SalesForce possesses many of the building blocks I recommend other API providers operate, and they are very advanced in some of their support and training efforts, but their documentation, which is the biggest pain point for developers, leaves a lot to be desired. I’m used to having to jump through hurdles to get up and running with APIs, so the friction for me was probably less than a newer API developer would experience. I could see the domain instance URL, versioning, and available API paths proving to be a significant hurdle if you didn’t understand what was going on. Something that could be significantly minimized with some simpler, more modern API docs, and OpenAPI definitions and Postman Collections available.


A README For Your Microservice Github Repository

I have several projects right now that need a baseline for what is expected of microservices developers when it comes to the README for their Github repository. Each microservice should be a self-contained entity, with everything needed to operate the service within a single Github repository. Making the README the front door for the service, and something that anyone engaging with a service will depend on to help them understand what the service does, and where to get at anything needed to operate the service.

Here is a general outline of the elements that should be present in a README for each microservice (a skeleton version follows the list), providing as much of an overview as possible for each service:

  • Title - A concise title for the service that fits the pattern identified and in use across all services.
  • Description - Less than 500 words that describe what a service delivers, providing an informative, descriptive, and comprehensive overview of the value a service brings to the table.
  • Documentation - Links to any documentation for the service including any machine readable definitions like an OpenAPI definition or Postman Collection, as well as any human readable documentation generated from definitions, or hand crafted and published as part of the repository.
  • Requirements - An outline of the other services, tooling, and libraries needed to make a service operate, providing a complete list of EVERYTHING required to work properly.
  • Setup - A step by step outline, from start to finish, of what is needed to setup and operate a service, providing as much detail as you possibly can, so any new user is able to get up and running with a service.
  • Testing - Providing details and instructions for mocking, monitoring, and testing a service, including any services or tools used, as well as links or reports that are part of active testing for a service.
  • Configuration - An outline of all configuration and environmental variables that can be adjusted or customized as part of service operations, including as much detail as possible on default values, or options that would produce different known results for a service.
  • Road Map - An outline broken into three groups, 1) planned additions, 2) current issues, 3) change log. Providing a simple, descriptive outline of the road map for a service with links to any Github issues that support what the plan is for a service.
  • Discussion - A list of relevant discussions regarding a service with title, details, and any links to relevant Github issues, blog posts, or other updates that tell a story behind the work that has gone into a service.
  • Owner - The name, title, email, phone, or other relevant contact information for the owner, or owners, of a service, providing anyone with the information they need to reach out to the person who is responsible for a service.
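
Translated into an actual README, one possible skeleton for this outline looks like the following–the section names are just the ten elements above restated:

    # Service Name

    Less than 500 words describing what the service delivers.

    ## Documentation
    Links to the OpenAPI definition, Postman Collection, and generated docs.

    ## Requirements
    Everything needed to make the service operate.

    ## Setup
    Step by step instructions for getting up and running.

    ## Testing
    How the service is mocked, monitored, and tested.

    ## Configuration
    All configuration and environmental variables, with defaults.

    ## Road Map
    Planned additions, current issues, and a change log.

    ## Discussion
    Relevant issues, blog posts, and other updates.

    ## Owner
    Name, title, and contact information for the service owner.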

That represents the ten things each service should contain in its README, providing what is needed for ANYONE to understand what a service does. At any point in time, someone should be able to land on the README for a service, and be able to understand what is happening, without having to reach out to the owner. This is essential for delivering individual services, but also for the delivery of services at scale, across tens, or hundreds of services. If you want to know what a service does, or what the team behind the service is thinking, you just have to READ the README to get EVERYTHING you need.

It is important to think outside your bubble when crafting and maintaining a README for each microservice. If it is not up to date, or is lacking relevant details, it means the service will potentially be out of sync with other services, and introduce problems into the ecosystem. The README is a simple, yet crucial aspect of delivering services, and it should be something any service stakeholder can read and understand without asking questions. Every service owner should be stepping up to the plate and owning this aspect of their service development, professionally owning this representation of what their service delivers. In a microservices world, each service doesn’t hide in the shadows, it puts its best foot forward and proudly articulates the value it delivers, or it should be deprecated and go away.


REST and gRPC Side by Side In New Google Endpoints Documentation

Google has been really moving forward with their development of, and storytelling around, gRPC. Their high speed approach to doing APIs uses HTTP/2 as a transport, and protocol buffers (ProtoBuf) as its serialized message format. Even with all this forward motion, they aren’t leaving everyone doing basic web APIs behind, and are actively supporting both approaches across all new Google APIs, as well as in their services and tooling for deploying APIs in the Google Cloud–supporting two-speed APIs side by side, across their platform.

When you are using Google Cloud Endpoints to deploy and manage your APIs, you can choose to offer a more RESTful edition, as well as a more advanced gRPC edition. They’ve continued to support this approach across their service features and tooling, now extending it to documenting your APIs. As part of their rollout of a supporting API portal and documentation for your Google Cloud Endpoints, you can automatically document both flavors of your APIs. Making a strong case for offering both types of APIs, depending on the types of use cases you are looking to solve, and the types of developers you are catering to.
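
To give a sense of how a single definition can serve both speeds, here is a hedged sketch of the gRPC transcoding pattern Google uses across its APIs, where an HTTP annotation maps a gRPC method to a RESTful path–the service and message names are hypothetical:

    syntax = "proto3";

    import "google/api/annotations.proto";

    service FacilityService {
      // Reachable natively over gRPC, and as GET /v1/facilities/{id}
      // for REST clients, via HTTP/JSON transcoding
      rpc GetFacility(GetFacilityRequest) returns (Facility) {
        option (google.api.http) = {
          get: "/v1/facilities/{id}"
        };
      }
    }

    message GetFacilityRequest {
      string id = 1;
    }

    message Facility {
      string id = 1;
      string name = 2;
    }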

In my experience, simpler web APIs are ideal for people just getting going on their API journey, and will accommodate 60-75% of the API deployment needs out there. Some organizations further along in their API journey, and those providing B2B solutions, will potentially need higher performance, higher volume gRPC APIs. Making what Google is offering with their cloud API infrastructure a pretty compelling option for helping mature API providers shift gears, or even helping folks understand that they’ll be able to shift gears down the road. You get an API deployment and management solution that simultaneously supports both speeds, along with the other supporting features, services, and tooling, like documentation, delivered at both speeds.

Normally I am pretty skeptical of single provider / community approaches to delivering alternative approaches to APIs. It is one of the reasons I still hold reservations around GraphQL. However, with Google and gRPC, they have put HTTP/2 to work, and the messaging format is open source. While the approach is definitely all Google, they have embraced the web, which I don’t see out of the Facebook led GraphQL community. I still question Google’s motives, not because they are up to anything shady, but because I’m just skeptical of EVERY company’s motivations when it comes to APIs. After eight years of doing this, I don’t trust anyone to not be completely self serving. However, I’ve been tuned into gRPC for some time now, and I haven’t seen any signs that make me nervous, and they keep delivering beneficial features, like they did with this documentation, keeping me writing stories and showcasing what they are doing.


OpenAPI Makes Me Feel Like I Have A Handle On What An API Does

APIs are hard to talk about across large groups of people, while ensuring everyone is on the same page. APIs don’t have much of a visual side to them, providing a tangible reference for everyone to use by default. This is where OpenAPI comes in, helping us “see” an API, and establish a human and machine readable document that we can produce, pass around, and use as a reference for what an API does. OpenAPI makes me feel like I have a handle on what an API does, in a way that I can actually have a conversation around with other people–without it, things are much fuzzier.

Many folks associate OpenAPI with documentation, code generation, or some other tooling or service that uses the specification–putting their emphasis on the tangible thing, over the definition. While working on projects, I spend a lot of time educating folks about what OpenAPI is, what it is not, and how it can facilitate communication across teams and API stakeholders. While this work can be time consuming, and a little frustrating sometimes, it is worth it. A little education, and OpenAPI adoption, can go a long way toward moving projects along, because (almost) everyone involved is able to play an active role in moving API operations forward.

Without OpenAPI it is hard to consistently design API paths, as well as articulate the headers, parameters, status codes, and responses being applied across many APIs, and teams. If I ask, “are we using the sort parameter across APIs?”, and there is no OpenAPI, I can’t get an immediate or timely answer–it is something that might not be reliably answered at all. Making OpenAPI a pretty critical conversation and collaboration driver across the API projects I’m working on. I am not even getting to the part where we are deploying, managing, documenting, or testing APIs. I’m just talking about APIs in general, and making sure everyone involved in a meeting is on the same page when we are talking about one or many APIs.

Almost every API I’m working on starts as a basic OpenAPI, even with just a title and description, published to a Github, Gitlab, Bitbucket, or other repository. Then I usually add schema definitions, which drive conversations about how the schema will be accessed, as we add paths, parameters, and other details of the requests and responses for each API. With OpenAPI acting as the guide throughout the process, ensuring we are all on the same page, and ensuring all stakeholders, as well as myself, have a handle on what is going on with each API being developed. Without OpenAPI, we can never quite be sure we are all talking about the same thing, let alone have a machine readable definition that we can all take back to our corners to get work done.
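
That starting point really can be as small as this kind of seed, sketched here for a hypothetical service, with everything else layered on as the conversations happen:

    openapi: 3.0.0
    info:
      title: Example Service
      description: What we think this service will do, in plain language.
      version: 0.1.0
    paths: {}    # paths, parameters, responses, and schema get added as they are agreed upon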


Helping Stoplight.io Get The Word Out About Version 3.0

I’ve been telling stories about what the Stoplight.io team has been building for a couple of years now. They are one of the few API service provider startups left that are doing things that interest me, and really delivering value to their API consumers. In the last couple of years, as things have consolidated, and funding cycles have shifted, there just hasn’t been the same amount of investment in interesting API solutions. So when Stoplight.io approached me to do some storytelling around their version 3.0 release, I was all in. Not just because I’m getting paid, but because they are doing interesting things that I feel are worth talking about.

I’ve always categorized Stoplight.io as an API design solution, but as they’ve iterated upon the last couple of versions, I feel they’ve managed to find their footing, and are maturing to become one of the few truly API lifecycle solutions available out there. They don’t serve every stop along the API lifecycle, but they do focus on a handful of the most valuable stops, and most importantly, they have adopted OpenAPI as the core of what they do, allowing API providers to put Stoplight.io to work for them, as well as any other solutions that support OpenAPI at the core.

As far as the stops along the API lifecycle that they service, here is how I break them down:

  • Definitions - An OpenAPI driven way of delivering APIs, that goes beyond just a single definition, and allows you to manage your API definitions at scale, across many teams, and services.
  • Design - One of the most advanced API design GUI solutions out there, helping you craft and evolve your APIs using the GUI, or working directly with the raw JSON or YAML.
  • Virtualization - Enabling the mocking and virtualization of your APIs, allowing you to share, consume, and iterate on your interfaces long before you have to deliver more costly code.
  • Testing - Provides the ability to not just test your individual APIs, but to define and automate detailed tests and assertions, delivering a variety of scenarios to ensure APIs are doing what they should be doing.
  • Documentation - Allows for the publishing of simple, clean, but interactive documentation that is OpenAPI driven, and can be shared with your team, and your API community, through a central portal.
  • Discovery - Tightly integrated with Github, and maximizing an OpenAPI definition in a way that makes the entire API lifecycle discoverable by default.
  • Governance - Allows for teams to get a handle on the API design and delivery lifecycle, while working to define and enforce API design standards, and enforce a certain quality of service across the lifecycle.

They may describe themselves a little differently, but in terms of the way that I tag API service providers, these are the stops they service along the API lifecycle. They have a robust API definition and design core, with an attractive, easy to use interface, which allows you to define, design, virtualize, document, test, and collaborate with your team, community, and other stakeholders. Which makes them a full API lifecycle service provider in my book, because they focus on serving multiple stops, and they are OpenAPI driven, which allows every other stop to also be addressed using any other tools and services that support OpenAPI–which is how you do business with APIs in 2018.

I’ve added API governance to what they do, because they are beginning to build in much of what API teams are going to need to begin delivering APIs at scale across large organizations. Not just design governance, but the model and schema management you’ll need, combined with mocking, testing, documentation, and the discovery that comes along with delivering APIs like Stoplight.io does. They reflect not just where the API space is headed with delivering APIs at scale, but what organizations need when it comes to bringing order to their API-driven, software development lifecycle in a microservices reality.

I have five separate posts that I will be publishing over the next couple of weeks as Stoplight.io releases version 3.0 of their API development suite. Per my style, the posts won’t always be directly about their product, but about the solutions it delivers, though occasionally you’ll hear me mention them directly, because I can’t help it. Thanks to Stoplight.io for supporting what I do, and thanks to you my readers for checking out what Stoplight.io brings to the table. I think you are going to dig what they are up to.


Breaking Down Your Postman API Collections Into Meaningful Units Of Compute

I’m fascinated with the unit of compute as defined by a microservice, OpenAPI definition, Postman Collection, or other way of quantifying an API-driven resource. Asking the question, “how big or how small is an API?”, and working to define the smallest unit of compute needed at runtime. I do not feel there is a perfect answer to any of these questions, but it doesn’t mean we shouldn’t be asking them, and packaging up our API definitions in a more meaningful way.

As I was profiling APIs, and creating Postman Collections, the Okta team tweeted their own approach to delivering their APIs at me. They tactically place Run in Postman buttons throughout their API documentation, as well as provide a complete listing of all the Postman Collections they have. Showcasing that they have broken up their Postman Collections along what I’d consider to be service lines. Providing small, meaningful collections for each of their user authentication and authorization APIs:

Each collection is published with its own Run in Postman button:

  • Authentication
  • API Access Management (OAuth 2.0)
  • OpenID Connect
  • Client Registration
  • Sessions
  • Apps
  • Events
  • Factors
  • Groups
  • Identity Providers (IdP)
  • Logs
  • Admin Roles
  • Schemas
  • Users
  • Custom SMS Templates

Okta’s approach delivers a pretty coherent, microservices approach to crafting their Postman Collections, providing separate API runtimes for each service they bring to the table. Which I think gets at what I’m looking to understand when it comes to defining and presenting our APIs. It can be a lot more work to create your Postman Collections like this, rather than creating one single collection with all API paths, but from an API consumer standpoint, I’d rather have them broken down like this. I may not care about all of the APIs, and may just be looking to get my hands on a couple of services–why make me wade through everything?

I have imported the Postman Collections for the Okta API, and added them to my API Stack research. I’m going to convert them into OpenAPI definitions so I can use them beyond just Postman. I will end up merging them all back into a single OpenAPI definition, and Postman Collection, covering all the API paths. However, I will also be exploding them into individual OpenAPIs and Postman Collections for each individual API path, going well beyond what Okta has done. Further distilling down each unit of compute, allowing it to be profiled, executed, streamed, or put to work in other meaningful ways in isolation, without the constraints of the other services surrounding it.
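
To make the exploding part less abstract, here is a minimal Python sketch of how I think about it, assuming a provider-level okta-openapi.json file (a hypothetical name). It naively copies all components into each unit, where a more careful version would prune down to only the schemas each path actually references:

```python
import json
from pathlib import Path

# Load a provider-level OpenAPI definition (hypothetical file name).
source = json.loads(Path("okta-openapi.json").read_text())

Path("units").mkdir(exist_ok=True)

# Write a standalone OpenAPI definition for each individual API path,
# carrying the info, servers, and components along so each unit stands alone.
for path, operations in source.get("paths", {}).items():
    unit = {
        "openapi": source.get("openapi", "3.0.0"),
        "info": source.get("info", {}),
        "servers": source.get("servers", []),
        "paths": {path: operations},
        "components": source.get("components", {}),
    }
    slug = path.strip("/").replace("/", "-").replace("{", "").replace("}", "") or "root"
    Path(f"units/{slug}.json").write_text(json.dumps(unit, indent=2))
```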


A Visual View Of API Responses Within Our Documentation

Interactive API documentation is nothing new. We’ve had Swagger UI, and other incarnations for over five years now. We also have API explorers, and full API lifecycle client solutions like Postman to help us engage with APIs, and be able to quickly see responses from the APIs we are consuming. In my effort to keep pushing forward the API documentation conversation I’ve been beating the drum for more visual solutions to be baked into our interactive documentation for a while now, encouraging providers to make the responses we receive much more meaningful, and intuitive for consumers.

To help drum up awareness of this aspect of API documentation I’m always on the lookout for examples of it in the wild. There was the interesting approach out of the Food and Drug Administration (FDA), and now I’ve seen one out of the web data feeds API provider Webhose.io. When you are making API calls in their interactive dashboard you get a JSON response for things like news articles on the left hand side, but you also get a slider that will show a visual representation of the JSON response on the right side–making it much more readable to non-developers.

It provides a nice way to quickly make sense of API responses. Making them more accessible. Making it something that even non-developers can do. Essentially providing a reverse view source (view results?) for API responses. Taking the raw JSON, and providing an HTML lens for the humans trying to make decisions around the usage of a particular API. View source is how I learned HTML back in the mid 1990s, and I can see visualization tools for API responses helping average business users learn JSON, or at least making it a little less intimidating, and something they feel like they can put to work for them.
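
The mechanics of this are simple enough that anyone can sketch them out. Here is a crude Python example of the idea–not Webhose.io’s implementation, just a recursive walk that turns a JSON response into nested HTML lists (the response payload is made up):

```python
import json
from html import escape

def to_html(value):
    """Recursively render a JSON value as nested HTML lists,
    a rough 'reverse view source' for API responses."""
    if isinstance(value, dict):
        items = "".join(
            f"<li><strong>{escape(str(key))}</strong>: {to_html(item)}</li>"
            for key, item in value.items()
        )
        return f"<ul>{items}</ul>"
    if isinstance(value, list):
        return "<ol>" + "".join(f"<li>{to_html(item)}</li>" for item in value) + "</ol>"
    return escape(json.dumps(value))

# A made-up payload, standing in for something like a news article response.
response = {"posts": [{"title": "API News", "published": "2018-06-01"}]}
print(to_html(response))
```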

I really feel like more visualizations baked into API documentation is the future of interactive API docs. Being able to see API responses rendered as HTML, or as graphs, charts, and other visualizations, makes a lot of sense. APIs are an abstract thing, and even as a developer, I have a hard time understanding what is contained within each API response. I think having visual API responses will help us craft more meaningful API requests, making our API consumption much more precise, and impactful. If you see any interesting visualization layers to your favorite API’s documentation, please drop me a line, I’d like to add it to my list of interesting approaches.


Axway Asking for an OpenAPI of The Streamdata.io API So They Can Screenshot It

We are working closely with Axway on a number of projects over here at Streamdata.io. After we got out of a meeting with their team the other day we received an email from them asking if we had an OpenAPI definition for a demo Streamdata.io market data API. They wanted to include it in some marketing materials, and needed a screenshot of it. To generate the visual they desired, they needed an OpenAPI to make the API tangible enough for capturing in a screenshot, and presenting as part of a larger story.

This may sound like a pretty banal thing, but when you step back and realize the importance of OpenAPI when it comes to communication, and making something very abstract into a tangible, visual thing, it becomes more significant. You can tell someone there is a market data API, but taking a screenshot of documentation generated via an OpenAPI which displays the market data paths, a couple of parameters like stock ticker symbol and maybe date range, and then plugging in some actual values like the ticker symbol for AAPL, and showing the JSON response takes things to a new level. This is OpenAPI empowered storytelling, marketing, and communications in my book. Elevating what OpenAPI brings to the table to new stops along the API life cycle.

This isn’t just about documentation. This is about making an abstract API concept more visual, more meaningful, and able to be captured in an image. Axway is trying to demonstrate the value of their API solutions, coupled potentially with Streamdata.io services, in a single image–providing much richer context, and visualizations that amplify their marketing materials. This isn’t just documenting what is going on so that developers know what to do with an API, this is telling stories so that business users understand what is possible with an API–using a machine readable format like OpenAPI to help deliver the 1000 words the image will be worth.

Using OpenAPI like this reflects where I’d like to see API documentation go. Sure, we still need dynamic API documentation driven by OpenAPI definitions for developers to understand what is going on, but we need more snippets, visualizations, and emotion-driven solutions to exist. Things that marketers, bloggers, and other storytellers can use in their materials. We need OpenAPI-driven tools that help them plug in a relevant API definition, and generate a meaningful visual that they can use in a slide deck, blog post, or other material. We need our API documentation to speak beyond the developer community and become something that anyone can put to work in their API storytelling efforts–no coding required.


An Opportunity Around Providing A Common OpenAPI Enum Catalog

I’m down in the details of the OpenAPI specification lately, working my way through hundreds of OpenAPI definitions, trying to once again make sense of the API landscape at scale. I’m working to prepare as many API path definitions as I possibly can to be runnable within one or two clicks. OpenAPI definitions, and Postman Collections are essential to making this happen, both of which require complete details on the request surface area for an API. I need to know everything about the path, as well as any headers, path, or query parameters that need to be included. A significant aspect of this definition being complete includes default, and enum values being present.

If I can’t quickly choose from a list of values, or run with a default value, when executing an API, the time to seeing a live response grows significantly. If I have to travel back to the HTML documentation, or worse, do some Googling before I can make an API call, I just went from seconds to potentially minutes or hours before I can see a real world API response. Additionally, if there are many potential values available for each API parameter, enums become critical building blocks to helping me understand all the dimensions of an API’s surface area. Something that should have been considered as part of the API’s design, but often just gets left to the API documentation.

When playing with a Bitcoin API with the following path, /blocks/{pool_name}, I need the list of pools I can choose from. When looking to get a stock market quote from an API with the following path, /stock/{symbol}/quote, I need a list of all the ticker symbols. Having, or not having, these enum values at documentation and execution time is essential. Many of these lists of values are so common, developers take them for granted. Assuming that API consumers just have them lying around, and that they really aren’t worth including in documentation. You’d think we all have lists of states, countries, stock tickers, Bitcoin pools, and other data just laying around, but even as the API Evangelist, I often find myself coming up short.
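
For anyone unsure what this looks like in an actual definition, here is a minimal sketch of a path parameter carrying enum and default values, built as a Python dictionary and dumped as OpenAPI 3.0 YAML–the pool names are made up for illustration:

```python
import yaml  # PyYAML

# A hypothetical OpenAPI 3.0 parameter for a /blocks/{pool_name} path,
# with enum and default values so consumers can execute the call immediately.
parameter = {
    "name": "pool_name",
    "in": "path",
    "required": True,
    "description": "The mining pool to return blocks for.",
    "schema": {
        "type": "string",
        "enum": ["example-pool-one", "example-pool-two"],  # made-up values
        "default": "example-pool-one",
    },
}
print(yaml.safe_dump(parameter, sort_keys=False))
```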

All of this demonstrates a pretty significant opportunity for someone to create a Github hosted, searchable, forkable list of common OpenAPI enum lists. Providing an easy place for API providers, and API consumers to discover simple, or complex lists of values that should be present in API documentation, and included as part of all OpenAPIs. I recommend just publishing each enum JSON or YAML list as a Github Gist, and then publishing as a catalog via a simple Github Pages website. If I don’t see something pop up in the next couple of months, I’ll probably begin publishing something myself. However, I need another API related project like I need a hole in the head, so I’m holding off in hopes another hero or champion steps up and owns the enum portion of the growing OpenAPI conversation.
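
If such a catalog existed, putting it to work could be as simple as the following Python sketch, which pulls a shared enum list from a raw Gist URL (the URL and file names here are entirely hypothetical) and injects it into the matching parameters of an OpenAPI definition:

```python
import json
import urllib.request

# Hypothetical raw Gist URL for a shared enum list, part of the imagined catalog.
ENUM_URL = "https://gist.githubusercontent.com/example/raw/us-states.json"

def inject_enum(openapi, param_name, values):
    """Walk every operation and attach the shared enum to matching parameters."""
    for operations in openapi.get("paths", {}).values():
        for operation in operations.values():
            if not isinstance(operation, dict):
                continue
            for param in operation.get("parameters", []):
                if param.get("name") == param_name:
                    param.setdefault("schema", {})["enum"] = values
    return openapi

with urllib.request.urlopen(ENUM_URL) as response:
    states = json.load(response)

openapi = json.load(open("openapi.json"))
json.dump(inject_enum(openapi, "state", states), open("openapi.json", "w"), indent=2)
```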


Using Jekyll And OpenAPI To Evolve My API Documentation And Storytelling

I’m reworking my API Stack work as independent sets of Jekyll collections. Historically I just dumped all the APIs.json and OpenAPIs into a central data folder, and grouped them into folders by company name. Now I am breaking them out into tag based collections, using a similar structure. Further evolving how I document and tell stories using each API. I have been publishing a single OpenAPI for each platform, but now I’m publishing a separate OpenAPI for each API path–we will see where this goes, it might ultimately end up biting me in the ass. I’m doing this because I want to be able to talk about a single API path, and provide a definition that can be viewed, interpreted, and executed against, independent of the other paths–Jekyll+OpenAPI is helping me accomplish this.

With each API provider possessing its own APIs.json index, and each API path having its own OpenAPI definition, I’m able to mix up how I document and tell stories around these APIs. I can list them by API provider, or by individual API path. I can filter based upon tags, and provide execute-time links that reference each individual unit of API. I have separate JavaScript functions that can be referenced if the API path is GET, POST, or PUT. I can even inherit other relevant links like API sign up or terms of service as part of its documentation. I can reference all of this as part of larger documentation, or within blog posts, and other pages throughout the website–which will be refreshed whenever I update the OpenAPI definition.

If you aren’t familiar with how Jekyll works, it is a static content solution that allows you to develop collections. You can put CSV, JSON, or YAML into these collections (folders), and they become objects you can reference using Liquid syntax. So if I put Twitter’s APIs.json, and OpenAPI into a folder within my social collection, I can reference site.social.twitter, which is the APIs.json for Twitter’s entire API operations, and I can reference individual APIs as site.social.twitter.search for the individual OpenAPI defining the Twitter search API path. This decouples API documentation for me, and allows me to not just document APIs, but tell stories with API definitions, making my API portals much more interactive, and hopefully engaging.
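
If it helps to see the lookup pattern outside of Jekyll, here is a rough Python analogue of what Liquid is doing under the hood, assuming a folder layout like social/twitter/search.yaml (the layout and folder name are illustrative, not my actual repository structure):

```python
import yaml
from pathlib import Path

def load_collections(root):
    """Load a Jekyll-style folder of API definitions into a nested dict,
    so social/twitter/search.yaml resolves like site.social.twitter.search."""
    tree = {}
    for path in Path(root).rglob("*.yaml"):
        node = tree
        for part in path.relative_to(root).parts[:-1]:
            node = node.setdefault(part, {})
        node[path.stem] = yaml.safe_load(path.read_text())
    return tree

site = load_collections("_data")  # hypothetical data folder
search_openapi = site["social"]["twitter"]["search"]
```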

I just got my API Stack approach refreshed using this new format. Now I just need to go through all my APIs and rebuild the underlying Github repository. I have thousands of APIs that I track on, and I’m curious how this approach holds up at scale. While API Stack is a single repository, I can essentially publish any collection of APIs I desire to any of the hundreds of repositories that make up the API Evangelist network. Allowing me to seamlessly tell stories using the technical details of API operations, and the individual API resources they serve up. Further evolving how I tell stories around the APIs I’m tracking on. While my API documentation has always been interactive, I think this newer, more modular approach reflects the value each individual unit of an API brings to the table, rather than just looking to document all the APIs a provider possesses.


Labeling Your High Usage APIs and Externalizing API Metrics Within Your API Documentation

I am profiling a number of market data APIs as part of my research with Streamdata.io. As I work my way through the process of profiling APIs I am always looking for other interesting ideas for stories on API Evangelist. One of the things I noticed while profiling Alpha Vantage was that they highlighted their high usage APIs with prominent, very colorful labels. One of the things I’m working to determine in this round of profiling is how “real time” APIs are, or aren’t, and the high usage label adds another interesting dimension to this work.

While reviewing API documentation it is nice to have labels that distinguish APIs from each other. Alpha Vantage has a fairly large number of APIs, so it is nice to be able to focus on the ones that are used the most, and are more popular. For example, as part of my profiling I focused on the high usage technical indicator APIs, rather than profiling all of them. I need to be able to prioritize my work, and these labels helped me do that. Providing one example of the benefit that these types of labels can bring to the table. I’m guessing there are many other benefits to labeling popular APIs, beyond just saving me time.

This type of labeling is an interesting way of externalizing API analytics in my opinion. Which is another interesting concept to think about across API operations. How can you take the most meaningful data points across your API management processes, and distill them down, externalize and share them so that your API consumers can benefit from valuable API metrics? In this context, I could see a whole range of labels that could be established, applied to interactive documentation using OpenAPI tags, and made available across API documentation, helping make APIs even more dynamic, and in sync with how they are actually being used, measured, and making an impact on operations.
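
As a sketch of how this externalization might work, the following Python example takes hypothetical usage counts from an API management layer and labels heavily used operations in an OpenAPI definition–the x-usage-count field is a made-up vendor extension (OpenAPI allows arbitrary x- prefixed fields), and the threshold is arbitrary:

```python
# Hypothetical usage counts pulled from an API management layer.
usage = {"/stock/{symbol}/quote": 48210, "/blocks/{pool_name}": 302}

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def label_high_usage(openapi, counts, threshold=10000):
    """Tag heavily used operations so documentation can render a high usage badge."""
    for path, operations in openapi.get("paths", {}).items():
        if counts.get(path, 0) < threshold:
            continue
        for method, operation in operations.items():
            if method in HTTP_METHODS:
                operation.setdefault("tags", []).append("high-usage")
                operation["x-usage-count"] = counts[path]  # vendor extension
    return openapi
```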

I’m a big fan of making API documentation even more interactive, alive, and meaningful to API consumers. I’m thinking that tagging and labeling is how we are going to do this in the future. Generating a very visual, but also semantic layer of meaning that we can overlay in our API documentation, making it even more accessible to API consumers. I know that Alpha Vantage’s high usage labels have saved me significant amounts of work, and I’m sure there are other approaches that could continue delivering in this way. It is something I’m keeping a close eye on in this increasingly event-driven API landscape, where API integration is becoming more dynamic and real time.


Docker Engine API Has OpenAPI Download At Top Of Their API Docs

I am a big fan of API providers taking ownership of their OpenAPI definition, which enables API consumers to download a complete OpenAPI, import it into any client tooling like Postman, use it to generate client SDKs, and get up to speed regarding the surface area of an API. This is why I like to showcase API providers I come across who do this well, and occasionally shame API providers who don’t do it, and demonstrate to their consumers that they don’t really understand what OpenAPI definitions are all about.

This week I am showcasing an API provider who does it well. I was on the hunt for an OpenAPI of the Docker Engine API, for use in a project I am consulting on, and was pleased to find that they have a button to download the OpenAPI for each version of the Docker Engine API right at the top of the page. Making it dead simple for me, as an API consumer, to get up and running with the Docker API in my tooling. OpenAPI is about much more than just the API documentation, and something that should be a first class companion to ALL API documentation for EVERY API provider out there–whether or not you are a devout OpenAPI (fka Swagger) believer.
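
As a quick illustration of how little friction this leaves, here is a Python sketch that pulls down a versioned definition and inventories its surface area before importing it into Postman or other tooling–the exact URL is an assumption based on how the Docker docs link their versioned spec files:

```python
import urllib.request
import yaml

# Assumed URL pattern for the versioned spec linked from the Docker Engine API docs.
SPEC_URL = "https://docs.docker.com/engine/api/v1.37.yaml"

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

with urllib.request.urlopen(SPEC_URL) as response:
    spec = yaml.safe_load(response.read())

# A quick inventory of the paths and methods the definition describes.
for path, operations in spec.get("paths", {}).items():
    for method in operations:
        if method in HTTP_METHODS:
            print(method.upper(), path)
```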

The Docker API team just saved me a significant amount of time in tracking down another OpenAPI, which most likely would have been incomplete. Let alone the amount of work that would be required to hand-craft one for my project. I was able to take the existing OpenAPI and publish it to the team Github Wiki for a project I’m advising on. The team will be able to use the OpenAPI to import into their Postman client and begin to learn about the Docker API, which will be used to orchestrate the containers they are using to operate their own microservices. A subset of this team will also be crafting some APIs that proxy the Docker API, and allow for localized management of each microservice’s underlying engine.

I had to create the Consul OpenAPI for the team last week, which took me a couple hours. I was pleased to see Docker taking ownership of their OpenAPI. This is a drum I will keep beating here on the blog, until EVERY API provider takes ownership of their OpenAPI definition, providing their consumers with a machine readable definition of their API. OpenAPI is much more than just API documentation, and is essential to making sense of what an API does, and then taking that knowledge and quickly translating it into actual integration, in as short a time as possible. Don’t make integrating with your API difficult, reduce as much friction as possible, and publish an OpenAPI alongside your API documentation like Docker does.


API Life Cycle Basics: Documentation

API documentation is the number one pain point for developers trying to understand what is going on with an API, as they work to get up and running consuming the resources it delivers. From many discussions I’ve had with API providers it is also a pretty big pain point for them when it comes to keeping documentation up to date, and delivering value to consumers. Thankfully API documentation has been driven by API definitions like OpenAPI for a while now, helping keep things up to date and in sync with changes going on behind the scenes. The challenge for many groups who are only doing OpenAPI to produce documentation is that if the OpenAPI isn’t used across the API life cycle, it will often become forgotten, recreating that timeless challenge with API documentation.

Thankfully in the last year or so I’m beginning to see more API documentation solutions emerge, getting us beyond the Swagger UI age of docs. Don’t get me wrong, I’m thankful for what Swagger UI has done, but I’m finding it very difficult to get people beyond the idea that OpenAPI (fka Swagger) is the same thing as Swagger UI, and that the only reason you generate API definitions is to get documentation. There are a number of API documentation solutions to choose from in 2018, but Swagger UI still remains a viable choice for making sure your APIs are properly documented for your consumers.

  • Swagger UI - Do not abandon Swagger UI, keep using it, but decouple it from existing code generation practices.
  • Redoc - Another OpenAPI driven documentation solution.
  • Read the Docs - Read the Docs hosts documentation, making it fully searchable and easy to find. You can import your docs using any major version control system, including Mercurial, Git, Subversion, and Bazaar.
  • ReadMe.io - ReadMe is a developer hub for your startup or code. It’s a completely customizable and collaborative place for documentation, support, key generation and more.
  • OpenAPI Specification Visual Documentation - Thinking about how documentation can become visualized, not just text and data.

API documentation should not be static. It should always be driven from OpenAPI, JSON Schema, and other pipeline artifacts. Documentation should be part of the CI/CD build process, and published as part of an API portal life cycle as mentioned above. API documentation should exist for ALL APIs that are deployed within an organization, and used to drive conversations across development as well as business groups–making sure the details of API design are always communicated in as plain a language as possible.
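
To show how small this CI/CD step can be, here is a minimal Python sketch that publishes a static Swagger UI page pointing at whatever openapi.json the rest of the pipeline produces–the unpkg URLs are the standard Swagger UI distribution assets, and the docs/ output folder is an assumption:

```python
from pathlib import Path

# A static Swagger UI page that loads the pipeline's openapi.json at runtime.
PAGE = """<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="https://unpkg.com/swagger-ui-dist@3/swagger-ui.css">
  </head>
  <body>
    <div id="swagger-ui"></div>
    <script src="https://unpkg.com/swagger-ui-dist@3/swagger-ui-bundle.js"></script>
    <script>
      SwaggerUIBundle({ url: "openapi.json", dom_id: "#swagger-ui" });
    </script>
  </body>
</html>
"""

Path("docs").mkdir(exist_ok=True)
Path("docs/index.html").write_text(PAGE)
```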

I added the visual documentation as a link because I’m beginning to see hints of API documentation moving beyond the static, and even dynamic realm, and becoming something more visual. It is an area I’m investing in with my subway map work, trying to develop a consistent and familiar way to document complex systems and infrastructure. Documentation doesn’t have to be a chore, and when done right it can make a developer’s day brighter, and help them go from learning to integration with minimal friction. Take the time to invest in this stop along your API life cycle, as it will help both you, and your consumers, make sense of the resources you are producing.


We Are Not Supporting OpenAPI (fka Swagger) As We Already Published Our Docs

I was looking for an OpenAPI for the Consul API to use in a project I’m working on. I have a few tricks for finding OpenAPIs out in the wild, which always start with looking over at APIs.guru, then secondarily Githubbing it (are we to verb status yet?). From a search on Github I came across an issue on the Github repo for Hashicorp’s Consul, which asked for “improved API documentation”. A Hashicorp employee ultimately responded with “we just finished a revamp of the API docs and we don’t have plans to support Swagger at this time.” Highlighting the continued misconception of what “OpenAPI” is, what it is used for, and how important it can be to not just providing an API, but also to consuming it.

First things first. Swagger is now OpenAPI (has been for a while), an API specification format that is in the Open API Initiative (OAI), which is part of the Linux Foundation. Swagger is proprietary tooling for building with the OpenAPI specification. It’s an unfortunate and confusing situation that arose out of the move to the Open API Initiative, but it is one we need to move beyond, so you will find me correcting folks more often on this subject.

Next, let’s look at the consumer question, asking for “improved API documentation”. OpenAPI (fka Swagger) is much more than documentation. I understand this position, as much of the value it delivers to the API consumer overlaps with what we associate with documentation. It teaches us about the surface area of an API, detailing the authentication, request, and response structure. However, OpenAPI does this in a machine readable way that allows us to take the definition with us, load it up in other tooling like Postman, as well as use it to autogenerate code, tests, monitors, and many other time saving elements when we are working to integrate with an API. The lesson for API consumers here is that OpenAPI (fka Swagger) is much, much, more than just documentation.
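
To make those time savings concrete, here is a rough Python sketch of one of those autogenerated elements–a smoke test for every parameterless GET in a definition. The file name is hypothetical, and the fallback base URL simply reflects Consul’s default HTTP port:

```python
import urllib.request
import yaml

# Load a hand-crafted (or provider-supplied) OpenAPI definition.
spec = yaml.safe_load(open("consul-openapi.yaml"))
servers = spec.get("servers", [{"url": "http://localhost:8500"}])
base = servers[0]["url"]

# Generate a smoke test for every GET operation that requires no parameters.
for path, operations in spec.get("paths", {}).items():
    operation = operations.get("get")
    if not operation or operation.get("parameters"):
        continue  # skip anything that needs inputs we would have to invent
    status = urllib.request.urlopen(base + path).getcode()
    print(f"GET {path} -> {status}")
    assert status == 200
```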

Then, let’s look at it from the provider side. It looks like you just revamped your API documentation, without much review of the state of things when it comes to API documentation. Without being too snarky, after learning more about the design of your API, I’m guessing you didn’t look at the state of things when it comes to API design either. My objective is not to shame you for poor API design and documentation practices, just to point out that you are not picking your head up and looking around much when developing a public facing API that many “other” people will be consuming. It is precisely the time you should be picking up your head and looking around. The lesson for the API provider here is that OpenAPI (fka Swagger) is much, much, more than just documentation.

OpenAPI (fka Swagger) is much, much, more than just documentation! Instead of being able to fork an OpenAPI definition and share it with my team members, allowing me to drive interactive documentation within our project portal, and empowering each team member to import the definition and get up and running in Postman, I’m spending a couple of hours creating an OpenAPI definition for YOUR API. Once done I will have the benefits for my team that I’m seeking, but I shouldn’t have to do this. As an API provider, Consul should provide us consumers with a machine readable definition of the entire surface area of the API. Not just static documentation (that is incomplete). Please API providers, take the time to look up and study the space a little more when you are designing your APIs, and learn from what others are doing when it comes to delivering API resources. If you do, you’ll be much happier for it, and I’m guessing your API consumers will be as well!


The Transit Feed API Is A Nice Blueprint For Your Home Grown API Project

I look at a lot of APIs. When I land on the home page of an API portal, more often than not I am lost, confused, and unsure of what I need to do to get started. We developers are very good at complexifying things, and making our API implementations as messy as our backends, and the API ideas in our heads. I suffer from this still, and I know what it takes to deliver a simple, useful API experience. It just takes time, resources, and the knowledge to do it properly, and simply. Oh, and caring. You have to care.

I am always on the hunt for good examples of simple API implementations that people can emulate, that aren’t the API rockstars like Twilio and Stripe who have crazy amounts of resources at their disposal. One good example of a simple, useful, well presented API can be found with the Transit Feeds API, which aggregates the feeds of many different transit providers around the world. When I land on the home page of Transit Feeds, I immediately know what is going on, and I go from home page to making my first API call in under 60 seconds–pretty impressive stuff, for a home grown API project.

While there are still some rough edges, Transit Feeds has all the hallmarks of a quality API implementation. A simple UI, with a clear message about what it does on the home page, but most importantly an API that does one thing, and does it well–providing access to transit feeds. The site uses Github OAuth to allow me to instantly sign up and get my API key–which is how ALL APIs should work. You land on the portal, you immediately know what they do, and you have your keys in hand, making an API call, all without having to create yet another API developer account.

The Transit Feeds API provides an OpenAPI definition, and uses it to drive their Swagger UI API documentation. I wish the API documentation was embedded onto the docs page, but I’m just thankful they are using OpenAPI, and provide detailed interactive API documentation. Additionally, they have a great updates page, providing recent site, feed, and data updates across the project. To provide support they wisely use Github Issues to help provide a feedback loop with all their API consumers.

It isn’t rocket surgery. Transit Feeds makes it look easy. They provide a pretty simple blueprint that the rest of us can follow. They have all the essential building blocks, in an easy to understand, easy to get up and running format. They leverage OpenAPI and Github, which should be the default for any public API. I’d love to see some POST and PUT methods for the API, encouraging more engagement from users, but as I said earlier, I’m pretty happy with what is there, and just hope that the project owners keep investing in the Transit Feeds API. It provides a great example for me to use when working with transit data, but also gives me a home grown example of an API project that any of my readers could emulate.


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.