
API Definitions News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is defining not just its APIs, but its schemas, and the other moving parts of API operations.

The API Transparency Discussion Is Not Exclusively About Being Public Or Private

When I talk about companies using APIs to be more transparent, one of the immediate comments I receive from folks is that "not everyone can be public by default". I agree, but I always counter by introducing the concept that transparency can be applied in strictly internal or partner situations as well--public is not the only type of transparency out there.

I am a big proponent of the public version of API driven transparency, but I also feel it can be applied within the firewall, as well as on the open web. Simple developer portals, with a quality selection of valuable APIs, up-to-date interactive documentation, and other resources, available at a known, yet secured location, can go a long way to stimulate integration--both human (team) and system. Self-service access to API design, definitions, deployment, management, testing, and other life cycle strategies, as well as to the API resources themselves, helps establish a rich environment for collaboration, reuse, and consistency in API strategy across an organization.

Internal and partner approaches to transparency, as well as public transparency, help break down silos. Think about the separations between some of the other groups in your organization, and possibly with your leadership up the ladder. Would it help if the road map for your team was out in the open for anyone within your company to follow? Would conversations around outages and system stability be more productive if they included a wider group--maybe preventing some of the micro aggressions that occur behind closed doors?

There are many, many ways APIs can bring transparency to your organization, way before you ever consider doing it publicly. When I say API, this starts with the technical endpoints, but a modern API conversation ALWAYS involves documentation, code, communications, and feedback loops. The Amazon Web Services family of APIs isn't purely about the compute and storage endpoints. It is also about the self-service documentation, videos, tutorials, case studies, the 24/7 community forums, and paid tiers of premium support, that make it all go round.

How could API driven transparency break down the silos in your organization, and help things operate a little more efficiently?


When Done Right APIs Can Deliver On Transparency

Transparency is going to become more and more critical, as more of our daily lives continue to move online.


http://thenextweb.com/insider/2015/12/11/big-data-and-transparency-why-it-matters-to-you/


If Twitter Can Deliver Transparency Around API Access and Business Model, They Might Be Able To Find Their Way Again

It has been a year or two since I've given any deep thought to the Twitter ecosystem. So little meaningful action has come out of the social platform over the last couple of years that I had all but given up on it being a platform with any sort of future for developers.

When Jack Dorsey apologized and solicited feedback from the community this week, I honestly felt I didn't have much advice to offer--I just wasn't prepared. Twitter does almost everything right when it comes to their developer ecosystem--well, on the surface. Where they do fail is within the most critical areas of API operations:

  • Communication - Twitter communicates about the usual mundane API platform stuff, but lacks a tone and substance that really cuts through the BS for developers. In my opinion it will take years to recover from the tone set by Dick Costolo & Ryan Sarver--they just did not respect developers at all, and it showed. This isn't something you recover from quickly.
  • Access & Rate Limits - While communications around access levels and rate limits have gotten much clearer, with a dedicated area in the dev portal, and available inline within documentation, it will take going the extra mile before developers feel like there is truly a level playing field. It will require more API access to account level usage, notifications, dashboards, etc. to heal these wounds--real-time communication (a sketch of checking rate limit status programmatically follows this list).
  • Business Model - There is just no clear business model evident from the developer ecosystem--something that showcases how Twitter is making money, where the API fits into this, and the opportunity for developers to play in this game. I've broken down the pricing confusion involved with using the REST API and Streaming API, and tried to understand the different levels of access--even though my research is dated, the confusion still exists. There is no clarity when it comes to how I can grow my usage of Twitter (as a startup), so many resort to very bad behavior to make ends meet.
  • Personality - There is no personality or face to the Twitter platform, so it always defaults up the chain to the current CEO. Maybe a new director of platform came on after Ryan Sarver left, but I couldn't tell you who it was. The platform needs a leader who will set the same tone Jack Dorsey is trying to set by returning to the CEO position--Twitter needs a Jack who is specifically in charge of the platform vision and message, and will be the friendly face of the platform turnaround.
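To make "real-time communication" around access concrete: Twitter already exposes a rate limit status resource, and developers can poll it to see exactly where they stand. Here is a minimal sketch in Python, assuming an app-only OAuth2 bearer token (the TOKEN placeholder is hypothetical), against the 1.1-era rate_limit_status endpoint:

```python
# Sketch: poll Twitter's rate limit status resource so an app can see
# exactly where it stands, instead of guessing. Assumes an app-only
# OAuth2 bearer token; TOKEN is a hypothetical placeholder.
import requests

TOKEN = "YOUR_BEARER_TOKEN"
URL = "https://api.twitter.com/1.1/application/rate_limit_status.json"

resp = requests.get(
    URL,
    params={"resources": "statuses,search"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# The response groups windows by resource family, then by endpoint.
for family, endpoints in resp.json()["resources"].items():
    for endpoint, window in endpoints.items():
        print(f"{endpoint}: {window['remaining']}/{window['limit']} calls remaining")
```

Surfacing this same information in dashboards and notifications, rather than leaving developers to poll for it, is the kind of transparency that would help rebuild trust.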

The lack of communication, overall tone, and lack of transparency and clarity around access, rate limits, and the business model has just left a bad taste in the mouths of Twitter developers. I use the Twitter API actively as part of my business, but I would never consider building a new product that depends on the Twitter API--never, and I know many other developers feel the same.

I've long moved on from my utopian views of what the Twitter API could have been; there are too many powerful actors who are investors and partners now for this ever to be a reality. I do not think Twitter should entertain and do business with ALL 3rd party developers, and eliminate rate limits--I am more experienced in API operations than that. However, Twitter has to provide a clear ladder that developers can climb when it comes to accessing the API, ascending all the way up to partner levels, with transparency all along the way.

Twitter has to be communicating with the community at every step, and honestly the community needs to get better at communicating with Twitter as well. I respect the challenge Twitter has when it comes to managing a million developers, and know these things are not easy, and it makes me happy to see Jack taking a clear stance, and reaching out to the community for feedback.

As Jack Dorsey said, transparency is the key. Transparency around API access, rate limits, the roadmap, business model, and partner and developer revenue opportunities will be critical going forward. It makes me happy to see language like "Developer Rules of the Road" finally gone from the legal side of platform operations--it was language like this that set a really bad tone for developers. I really hope Twitter can turn this corner and reboot its ecosystem, as the platform plays a critical role in almost every industry, and in my opinion is the most important API out there today. I would really like to be able to showcase Twitter as a platform that all API providers should emulate once again.


Algorithmic Transparency With Containers and APIs

I believe in the ability of APIs to pull back the curtain of the great Oz that we call IT. The average business and individual technology consumer has long been asked to just believe in the magic behind the tech we use, putting the control into the hands of those who are in the know. This is something that has begun to thaw, with the introduction of the Internet, and the usage of web APIs to drive applications of all shapes and sizes.

It isn't just that we are poking little holes into the corporate and government firewall to drive the next generation of applications; it is also that a handful of API pioneers like Amazon, Flickr, Twitter, Twilio, and others saw the potential of making these exposed resources available to any developer. The pulling back of the curtain was done via these two acts: exposing resources using the Internet, and inviting in 3rd parties to learn about, and tap into, these resources.

Something that is present in this evolution of software development is trust. API providers have to have a certain amount of trust that developers will respect their terms of service, and API consumers have to have a certain amount of trust that they can depend on API providers. To achieve this, there needs to be a healthy dose of transparency present, so API providers can see into what consumers are doing with their resources, and API consumers can see into the operations and roadmap of the platform.

When transparency and trust do not exist, this is when the impact of APIs begins to break down, and they become simply another tech tool. If a platform is up to no good, has ill intentions, is selling vaporware, or there is corruption behind the scenes, the API concept is just going to create problems, for both provider and consumer. How much is exposed via an API interface is up to the API designer, architect, and ultimately the governing organization.

There are many varying motivations behind why companies open up APIs, and the reasons they make them public or not. APIs allow companies to keep control over their data, content, and algorithmic resources, while also opening them up so "innovation" can occur, or so they are simply accessible by 3rd party resources, bypassing the historical friction or bottleneck that is IT and developer groups. Some companies I work with are aware of this balance being struck, while many others are not aware at all; they are simply trying to make innovation happen, or provide access to resources.

As I spend some brain cycles pondering algorithmic transparency, and the recent concept of "surge pricing" used by technology providers like Uber and Gogo, I am looking to understand how APIs can help pull back the curtain that is in front of many algorithms impacting our lives, in the same way APIs have pulled back the curtains on traditional IT operations and software development. As part of this thought exercise I'm thinking about the role Docker and other virtualized containers can play in providing us with more transparency into how algorithms are making decisions around us.

When I deploy one of my APIs using my microservices model, it has two distinct API layers, one for the container, and one for what runs inside of it. Docker comes ready to go with an API for all aspects of its operations--here is a Swagger definition of it. What if all algorithms came with an API by default, just like each Docker container does? If we put algorithms into containers, each would have an interface for every aspect of its operation. The API wouldn't expose the actual inner workings of the algorithm, and its calculations, but would provide a complete interface for all its functionality.
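To make the two layers concrete, here is a minimal sketch of the inner layer: a hypothetical surge pricing algorithm wrapped in a small Python (Flask) API, exposing its inputs, outputs, and operational metadata while keeping the calculation itself opaque. Every name here--endpoints, parameters, version string--is illustrative, not any vendor's actual interface; the outer layer would be the Docker API the container itself exposes.

```python
# Sketch of the "algorithm in a container" idea: a hypothetical surge
# pricing algorithm wrapped in a web API. /price exposes inputs and
# outputs, /meta exposes operational metadata, and the calculation
# itself stays opaque. All names are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

def surge_multiplier(demand: float, supply: float) -> float:
    """The proprietary logic stands in here; only its interface is public."""
    return max(1.0, round(demand / max(supply, 1.0), 2))

@app.route("/price")
def price():
    demand = float(request.args.get("demand", 1))
    supply = float(request.args.get("supply", 1))
    return jsonify({"surge_multiplier": surge_multiplier(demand, supply)})

@app.route("/meta")
def meta():
    return jsonify({"algorithm_version": "2015.12", "inputs": ["demand", "supply"]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # the inner API layer of the container
```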

How much of this API a company wished to expose would vary, just as with any API, but companies who cared about the trust balance between them, their developers, and end-users could offer a certain amount of transparency to build trust. The API wouldn't give away the proprietary algorithm, but would give 3rd party groups a way to test assumptions, and verify the promises made around what an algorithm delivers, thus pulling back the curtain. With no API, we have to trust Uber, Gogo and other providers about what goes into their surge pricing. With an API, 3rd party regulators, and potentially any individual, could run tests, validating what is being presented as algorithmic truth.
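Continuing the sketch above, 3rd party verification could be as simple as sweeping inputs against that hypothetical /price endpoint and testing a published claim--say, that surge never exceeds 3x--without ever seeing the algorithm's internals:

```python
# Sketch: a third party sweeps inputs against the hypothetical /price
# endpoint above, testing the published claim that surge is capped at
# 3x, without any access to the algorithm's internals.
import itertools
import requests

BASE = "http://localhost:8080"  # wherever the containerized algorithm runs
CLAIMED_CAP = 3.0

violations = []
for demand, supply in itertools.product(range(1, 20), range(1, 20)):
    resp = requests.get(f"{BASE}/price", params={"demand": demand, "supply": supply})
    multiplier = resp.json()["surge_multiplier"]
    if multiplier > CLAIMED_CAP:
        violations.append((demand, supply, multiplier))

print(f"tested 361 input pairs, {len(violations)} exceeded the claimed cap")
```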

I know many companies, entrepreneurs, and IT folks will dismiss this as bullshit. I'm used to that. Most of them don't follow my beliefs around the balance between the tech, business, and politics of APIs, as well as the balance between platform, developers, end-users, and what I consider to be an inevitable collision with government regulation. For now this is just a thought exercise, but it is something I will be studying and pondering more, and as I gather more examples of algorithmic, or "surge", pricing, I will work to evolve these thoughts, and solidify them into something more presentable.


Pushing Forward Algorithmic Transparency When It Comes To The Concept Of Surge Pricing

I've been fascinated by the idea of surge pricing since Uber introduced the concept to me. I'm not interested in it because of what it will do for my business; I'm interested because of what it will do for / to business. I'm also concerned about what this will do to the layers of our society who can't afford, and aren't able to keep up with, this algorithmic meritocracy we are assembling.

While listening to my podcasts the other day, I learned that Gogo Inflight wifi also uses surge pricing, which is why some flights are more expensive than others. I had long suspected they used some sort of algorithm for figuring out their pricing, because on some flights I'm spending $10.00, and on others I'm paying $30.00. Obviously they are in search of the sweet spot, to make the most money off business travelers looking to get their fix.

Algorithmic transparency is something I'm very interested in, and something I feel APIs have a huge role to play in, helping us make sense of exactly how companies structure their pricing. This is right up my alley, and something I will add to my monitoring, searching for stories that mention surge pricing, and startups who wield it publicly as part of their strategy, as well as those who work to keep it a secret.

This is where my research starts going beyond just APIs, but it is also an area I hope to influence with some API ways of thinking. We'll see where it all goes; hopefully by tuning in early, I can help steer some thinking when it comes to businesses approaching surge pricing (or not).


An API Evangelism Strategy To Map The Global Family Tree

In my work everyday as the API Evangelist, I get to have some very interesting conversations, with a wide variety of folks, about how they are using APIs, as well as brainstorming other ways they can approach their API strategy, allowing them to be more effective. One of the things that keeps me going in this space is this diversity. One day I’m looking at Developer.Trade.Gov for the Department of Commerce, the next talking to WordPress about APIs for 60 million websites, and then I’m talking with The Church of Jesus Christ of Latter-day Saints about the Family Search API, which is actively gathering, preserving, and sharing genealogical records from around the world.

I’m so lucky I get to speak with all of these folks about the benefits, and perils, of APIs, helping them think through their approach to opening up their valuable resources using APIs. The process is nourishing for me because I get to speak to such a diverse number of implementations, pushing my understanding of what is possible with APIs, while also sharpening my critical eye, and my understanding of where APIs can help, or where they can possibly go wrong. Personally, I find a couple of things very intriguing about the Family Search API story:

  1. Mapping the world's genealogical history using a publicly available API — Going Big!!
  2. Potential from participation by not just big partners, but the long tail of genealogical geeks
  3. Transparency, openness, and collaboration shining through as the solution beyond just the technology
  4. The mission driven focus of the API overlapping with my obsession for API evangelism intrigues and scares me
  5. They have an existing developer area, APIs, and seemingly the necessary building blocks, but have failed to achieve platform level

I’m open to talking with anyone about their startup, SMB, enterprise, organizational, institutional, or government API, always leaving open a 15 minute slot to hear a good story--which turned into more than an hour of discussion with the Family Search team. See, FamilySearch already has an API, they have the technology in order, and they have many of the essential business building blocks as well, but where they are falling short is in dialing in both the business and politics of their developer ecosystem, to discover the right balance that will help them truly become a platform--which is my specialty. ;-)

This brings us to the million dollar question: How does one become a platform?

All of this makes FamilySearch an interesting API story. Given the scope of the API, to take something this big to the next level FamilySearch has to become a platform, and not a superficial “platform” where they are just catering to three partners, but one nourishing a vibrant long tail ecosystem of website, web application, single page application, mobile application, and widget developers. FamilySearch is at an important reflection point: they have all the parts and pieces of a platform, they just have to figure out exactly what changes need to be made to open up, and take things to the next level.

First, let’s quantify the company: what is FamilySearch? “For over 100 years, FamilySearch has been actively gathering, preserving, and sharing genealogical records worldwide”, believing that “learning about our ancestors helps us better understand who we are—creating a family bond, linking the present to the past, and building a bridge to the future”.

FamilySearch holds 1.2 billion total records, with 108 million completed in 2014 so far, 24 million awaiting, and 386 active genealogical projects going on. FamilySearch provides the ability to manage photos, stories, documents, people, and albums—allowing people to be organized into a tree, knowing the branch everyone belongs to in the global family tree.

FamilySearch started out as the Genealogical Society of Utah, which was founded in 1894 and dedicated to preserving the records of the family of mankind, looking to "help people connect with their ancestors through easy access to historical records”. FamilySearch is a mission-driven, non-profit organization, run by The Church of Jesus Christ of Latter-day Saints. All of this comes together to define an entity that possesses an image that will appeal to some, while leaving concern for others—making for a pretty unique formula for an API driven platform, one that doesn’t quite have a model anywhere else.

FamilySearch considers what they deliver to be a set of record custodian services:

  • Image Capture - Obtaining a preservation quality image is often the most costly and time-consuming step for records custodians. Microfilm has been the standard, but digital is emerging. Whether you opt to do it yourself or use one of our worldwide camera teams, we can help.
  • Online Indexing - Once an image is digitized, key data needs to be transcribed in order to produce a searchable index that patrons around the world can access. Our online indexing application harnesses volunteers from around the world to quickly and accurately create indexes.
  • Digital Conversion - For those records custodians who already have a substantial collection of microfilm, we can help digitize those images and even provide digital image storage.
  • Online Access - Whether your goal is to make your records freely available to the public or to help supplement your budget needs, we can help you get your records online. To minimize your costs and increase access for your users, we can host your indexes and records on FamilySearch.org, or we can provide tools and expertise that enable you to create your own hosted access.
  • Preservation - Preservation copies of microfilm, microfiche, and digital records from over 100 countries and spanning hundreds of years are safely stored in the Granite Mountain Records Vault—a long-term storage facility designed for preservation.

FamilySearch provides a proven set of services that users can take advantage of via web applications, as well as iPhone and Android mobile apps, resulting in the online community they have built today. FamilySearch also goes beyond their basic web and mobile application services, elevating things to the software as a service (SaaS) level with a pretty robust developer center and API stack.

Developer Center
FamilySearch provides the required first impression when you land in the FamilySearch developer center, quickly explaining what you can do with the API, "FamilySearch offers developers a way to integrate web, desktop, and mobile apps with its collaborative Family Tree and vast digital archive of records”, and immediately provides you with a getting started guide, and other supporting tutorials.

FamilySearch provides access to over 100 API resources in twenty separate groups: Authorities, Change History, Discovery, Discussions, Memories, Notes, Ordinances, Parents and Children, Pedigree, Person, Places, Records, Search and Match, Source Box, Sources, Spouses, User, Utilities, and Vocabularies, connecting you to the core FamilySearch genealogical engine.
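To give a feel for the developer experience, here is a rough sketch of pulling a Person resource in Python. It assumes the endpoint shape and the application/x-fs-v1+json media type described in the FamilySearch docs at the time, plus an OAuth 2.0 access token--check the developer center for the current contract before building on it:

```python
# Sketch: fetch a Person from the FamilySearch Tree, assuming the
# documented /platform/tree/persons/{id} shape and x-fs-v1+json media
# type; TOKEN and PERSON_ID are placeholders.
import requests

TOKEN = "YOUR_ACCESS_TOKEN"  # obtained via the FamilySearch OAuth 2.0 flow
PERSON_ID = "KWQS-BBQ"       # sample id of the kind used in the docs

resp = requests.get(
    f"https://api.familysearch.org/platform/tree/persons/{PERSON_ID}",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/x-fs-v1+json",
    },
)
resp.raise_for_status()
person = resp.json()["persons"][0]
print(person["display"]["name"], person["display"]["lifespan"])
```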

The FamilySearch developer area provides all the common, and even some forward leaning technical building blocks:

To support developers, FamilySearch provides a fairly standard support setup:

To augment support efforts there are also some other interesting building blocks:

Setting the stage for FamilySearch evolving to being a platform, they even posses some necessary partner level building blocks:

There is even an application gallery showcasing what web, Mac & Windows desktop, and mobile application developers have built. FamilySearch even encourages developers to “donate your software skills by participating in community projects and collaborating through the FamilySearch Developer Network”.

Many of the ingredients of a platform exist within the current FamilySearch developer hub, at least the technical elements, and some of the common business, and political building blocks of a platform, but what is missing? This is what makes FamilySearch a compelling story, because it emphasizes one of the core elements of API Evangelist—that all of this API stuff only works when the right blend of technical, business, and politics exists.

Establishing A Rich Partnership Environment

FamilySearch has some strong partnerships that have helped establish FamilySearch as the genealogy service it is today. FamilySearch knows they wouldn’t exist without the partnerships they’ve established, but how do you take it to the next level, and grow a much larger, organic API driven ecosystem where a long tail of genealogy businesses, professionals, and enthusiasts can build on, and contribute to, the FamilySearch platform?

FamilySearch knows the time has come to make a shift to being an open platform, but is not entirely sure what needs to happen to actually stimulate not just the core FamilySearch partners, but also a vibrant long tail of developers. A developer portal is not just a place where geeky coders come to find what they need; it is a hub where business development occurs at all levels, in both synchronous and asynchronous ways, in a 24/7 global environment.

FamilySearch acknowledges they have some issues when it comes to investing in API driven partnerships:

  • “Platform” means their core set of large partners
  • Not treating all partners like first class citizens
  • Competing with some of their partners
  • Don’t use their own API, creating a gap in perspective

FamilySearch knows if they can work out the right configuration, they can evolve FamilySearch from a digital genealogical web and mobile service to a genealogical platform. If they do this they can scale beyond what they’ve been able to do with a core set of partners, and crowdsource the mapping of the global family tree, allowing individuals to map their own family trees while also contributing to the larger global tree. With a proper API driven platform this process doesn’t have to occur via the FamilySearch website and mobile app; it can happen in any web, desktop, or mobile application anywhere.

FamilySearch already has a pretty solid development team taking care of the tech of the FamilySearch API, and they have 20 people working internally to support partners. They have a handle on the tech of their API; they just need to get a handle on the business and politics of their API, and invest in the resources needed to help scale the FamilySearch API from being just a developer area, to a growing genealogical developer community, to a full blown ecosystem that spans not just the FamilySearch developer portal, but thousands of other sites and applications around the globe.

A Good Dose Of API Evangelism To Shift Culture A Bit

A healthy API evangelism strategy brings together a mix of business, marketing, sales and technology disciplines into a new approach to doing business for FamilySearch. Done right, it can open up FamilySearch to outside ideas, and with the right framework allow the platform to move beyond just certification and partnering, to also investment and acquisition of data, content, talent, applications, and partners via the FamilySearch developer platform.

Think of evangelism as the grease in the gears of the platform, allowing it to grow, expand, and handle a larger volume of outreach and support. API evangelism works to lubricate all aspects of platform operation.

First, let's kick off by setting some objectives for why we are doing this--what are we trying to accomplish?

  • Increase Number of Records - Increase the number of overall records in the FamilySearch database, contributing to the larger goal of mapping the global family tree.
  • Growth in New Users - Grow the number of new users who are building on the FamilySearch API, increasing the overall headcount for the platform.
  • Growth In Active Apps - Increase not just new users, but the number of actual apps being built and used--not just counting people kicking the tires.
  • Growth in Existing User API Usage - Increase how existing users are putting the FamilySearch APIs to use. Educate about new features, increase adoption.
  • Brand Awareness - One of the top reasons for designing, deploying and managing an active API is to increase awareness of the FamilySearch brand.
  • What else?

What does developer engagement look like for the FamilySearch platform?

  • Active User Engagement - How do we reach out to existing, active users and find out what they need? How do we profile them, and continue to understand who they are and what they need? Is there a direct line to the CRM?
  • Fresh Engagement - How is FamilySearch contacting newly registered developers each week to see what their immediate needs are, while their registration is fresh in their minds?
  • Historical Engagement - How are historically active and / or inactive developers being engaged, to better understand what their needs are and what would make them active or increase activity?
  • Social Engagement - Is FamilySearch profiling the URL, Twitter, Facebook, LinkedIn, and Github profiles developers provide, and then actively engaging via these channels? (A sketch of profiling via the Github API follows this list.)
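As an example of the social engagement piece, profiling the Github side takes only a few lines of Python against the public Github API. The username is illustrative, and in practice the results would be synced into the CRM:

```python
# Sketch: profile a developer via the public Github API, pulling the
# signals worth syncing into a CRM. Unauthenticated calls to
# /users/{username} are rate limited, so add a token for real use.
import requests

def profile_developer(username: str) -> dict:
    resp = requests.get(f"https://api.github.com/users/{username}")
    resp.raise_for_status()
    u = resp.json()
    return {
        "name": u.get("name"),
        "company": u.get("company"),
        "blog": u.get("blog"),
        "public_repos": u["public_repos"],
        "followers": u["followers"],
    }

print(profile_developer("kinlane"))  # illustrative username
```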

Establish a Developer Focused Blog For Storytelling

  • Projects - There are over 390 active projects on the FamilySearch platform, plus any number of active web, desktop, and mobile applications. All of this activity should be regularly profiled as part of platform evangelism. An editorial assembly line of technical projects that can feed blog stories, how-tos, samples and Github code libraries should be taking place, establishing a large volume of exhaust via the FamilySearch platform.
  • Stories - FamilySearch is great at writing public and partner facing content, but there is a need for writing, editing, and posting stories derived from the technically focused projects, with SEO and API support by design.
  • Syndication - Syndicate the best of the content to Tumblr, Blogger, Medium and other relevant blogging sites on a regular basis.

Mapping Out The Genealogy Landscape

  • Competition Monitoring - Evaluate the regular activity of competitors via their blogs, Twitter, Github and beyond.
  • Alpha Players - Who are the vocal people in the genealogy space with active Twitter, blog, and Github accounts?
  • Top Apps - What are the top applications in the space, whether built on the FamilySearch platform or not, and what do they do?
  • Social - Map the social landscape for genealogy--who is who, and who should the platform be working with?
  • Keywords - Establish a list of keywords to use when searching for topics at search engines, QA sites, forums, social bookmarking and social networks. (This should already be done by marketing folks.)
  • Cities & Regions - Target specific markets in cities that make sense for the evangelism strategy. What are the local tech meetups, organizations, schools, and other gatherings? Who are the tech ambassadors for FamilySearch in these spaces?

Adding To Feedback Loop From Forum Operations

  • Stories - Derive stories for the blog from forum activity, and the actual needs of developers.
  • FAQ Feed - Is this being updated regularly with fresh questions and answers?
  • Streams - Are there other streams giving the platform a heartbeat?

Being Social About Platform Code and Operations With Github

  • Setup Github Account - Set up a FamilySearch platform developer account, and bring the internal development team in under a team umbrella.
  • Github Relationships - Manage followers, forks, downloads and other potential relationships via Github, which has grown beyond just code, and is social.
  • Github Repositories - Manage code sample Gists, official code libraries and any samples, starter kits or other code samples generated through projects.

Adding To The Feedback Loop From The Bigger FAQ Picture

  • Quora - Regular trawling of Quora, responding to relevant FamilySearch or industry related questions.
  • Stack Exchange - Regular trawling of Stack Exchange / Stack Overflow, responding to relevant FamilySearch or industry related questions.
  • FAQ - Add questions from the bigger FAQ picture to the local FamilySearch FAQ for local reference.

Leverage Social Engagement And Bring In Developers Too

  • Facebook - Consider setting up a new API specific Facebook company page. Post all API evangelism activities and manage friends.
  • Google Plus - Consider setting up a new API specific Google+ company page. Post all API evangelism activities and manage followers.
  • LinkedIn - Consider setting up a new API specific LinkedIn profile page that follows developers and other relevant users for engagement. Post all API evangelism activities.
  • Twitter - Consider setting up a new API specific Twitter account. Tweet all API evangelism activity and relevant industry landscape activity, discover new followers, and engage with followers.

Sharing Bookmarks With the Social Space

  • Hacker News - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Hacker News, to keep a fair and balanced profile, as well as network and user engagement.
  • Product Hunt - Product Hunt is a place to share the latest tech creations, providing an excellent format for API providers to share details about their new API offerings.
  • Reddit - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Reddit, to keep a fair and balanced profile, as well as network and user engagement.

Communicate Where The Roadmap Is Going

  • Roadmap - Provide regular roadmap feedback based upon developer outreach and feedback.
  • Changelog - Make sure the change log always reflects the roadmap communication or there could be backlash.

Establish A Presence At Events

  • Conferences - What are the top conferences occurring that we can participate in or attend? Pay attention to calls for papers at relevant industry events.
  • Hackathons - What hackathons are coming up in 30, 90, 120 days? Which should be sponsored, attended, etc.?
  • Meetups - What are the best meetups in target cities? Are there different formats that would best meet our goals? Are there any sponsorship or speaking opportunities?
  • Family History Centers - Are there local opportunities for the platform to hold training, workshops and other events at Family History Centers?
  • Learning Centers - Are there local opportunities for the platform to hold training, workshops and other events at Learning Centers?

Measuring All Platform Efforts

  • Activity By Group - Summary and highlights from weekly activity within each area of the API evangelism strategy.
  • New Registrations - Historical and weekly accounting of new developer registrations across APIs.
  • Volume of Calls - Historical and weekly accounting of API calls per API.
  • Number of Apps - How many applications are there? (A minimal sketch of a weekly roll-up follows this list.)
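As a sketch of what that roll-up could look like, assuming registration and API call events land in simple logs--real numbers would come from the API management layer:

```python
# Sketch: a weekly metrics roll-up from simple event logs. The sample
# data is illustrative; a real platform would pull these events from
# its API management layer.
from collections import Counter
from datetime import date

registrations = [date(2014, 11, 3), date(2014, 11, 5), date(2014, 11, 12)]
api_calls = {"person-api": 12840, "records-api": 30211}  # calls this week

weekly_signups = Counter(d.isocalendar()[1] for d in registrations)
for week, count in sorted(weekly_signups.items()):
    print(f"week {week}: {count} new developer registrations")
for api, calls in api_calls.items():
    print(f"{api}: {calls} calls this week")
```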

Essential Internal Evangelism Activities

  • Storytelling - Telling stories about an API isn’t just something you do externally--what stories need to be told internally to make sure an API initiative is successful?
  • Conversations - Incite internal conversations about the FamilySearch platform. Hold brown bag lunches if you need to, or internal hackathons to get them involved.
  • Participation - It is very healthy to include other people from across the company in API operations. How can we include people from other teams in API evangelism efforts? Bring them to events and conferences, and potentially expose them to local, platform focused events.
  • Reporting - Sometimes providing regular numbers and reports to key players internally can help keep operations running smoothly. What reports can we produce? Make them meaningful.

All of this evangelism starts with a very external focus, which is a hallmark of API and developer evangelism efforts, but if you notice, by the end we are bringing it home to the most important aspect of platform evangelism: the internal outreach. The number one reason APIs fail is a lack of internal evangelism--educating top and mid-level management as well as lower level staff, getting buy-in and direct hands-on involvement with the platform, and justifying budget costs for the resources needed to make a platform successful.

Top-Down Change At FamilySearch

The change FamilySearch is looking for already has top level management buy-in; the problem is that the vision is not in lock step sync with actual platform operations. When regular projects developed via the FamilySearch platform are regularly showcased to top level executives, and stories consistent with platform operations are told, management will echo what is actually happening via the FamilySearch platform. This will provide a much more ongoing, deeper message for the rest of the company, and partners, around what the priorities of the platform are, making it not just a meaningless top down mandate.

An example of this in action is the recent mandate from President Obama that all federal agencies should go “machine readable by default”, which includes using APIs and open data outputs like JSON, instead of document formats like PDF. This top down mandate makes for a good PR soundbite, but in reality has little effect on the ground at federal agencies. In reality it has taken two years of hard work on the ground, at each agency, between agencies, and with the public, to even begin to make this mandate a reality at over 20 federal agencies.

Top down change is a piece of the overall platform evolution at FamilySearch, but is only a piece. Without proper bottom-up, and outside-in change, FamilySearch will never evolve beyond just being a genealogical software as a service with an interesting API. It takes much more than leadership to make a platform.

Bottom-Up Change At FamilySearch

One of the most influential aspects of APIs I have seen at companies, institutions, and agencies is the change of culture brought when APIs move beyond just a technical IT effort, and become about making resources available across an organization, and enabling people to do their job better. Without an awareness, buy-in, and in some cases evangelist conversion, a large organization will not be able to move from a service orientation to a platform way of thinking.

If a company as a whole is unaware of APIs, both within the organization and out in the larger world of popular platforms like Twitter, Instagram, and others, it is extremely unlikely they will endorse, let alone participate in, moving from being a digital service to a platform. Employees need to see the benefits of a platform to their everyday job, and their involvement cannot require what they would perceive as extra work to accomplish platform related duties. FamilySearch employees need to see the benefits the platform brings to the overall mission, and play a role in making this happen—even if it originates from a top-down mandate.

Top bookseller Amazon was already on the path to being a platform with their set of commerce APIs, when after a top down mandate from CEO Jeff Bezos, Amazon internalized APIs in such a way that the entire company interacted, and exchanged resources, using web APIs, resulting in one of the most successful API platforms—Amazon Web Services (AWS). Bezos mandated that if an Amazon department needed to procure a resource from another department, like server or storage space from IT, it needed to happen via APIs. This wasn’t a meaningless top-down mandate; it made employees' lives easier, and ultimately made the entire company more nimble and agile, while also saving time and money. Without buy-in, and execution from Amazon employees, what we know as the cloud would never have occurred.

Change at large enterprises, organizations, institutions and agencies can be expedited with the right top-down leadership, and with the right platform evangelism strategy--one that treats internal stakeholders not just as targets of outreach efforts, but includes them in operations--it can result in sweeping, transformational changes. This type of change at a single organization can affect how an entire industry operates, similar to what we’ve seen from the ultimate API platform pioneer, Amazon.

Outside-In Change At FamilySearch

The final layer of change that needs to occur to bring FamilySearch from being just a service to a true platform is opening up the channels to outside influence when it comes not just to platform operations, but organizational operations as well. The bar is high at FamilySearch. The quality of services, the expectations of process, and the adherence to the mission are strong, but if you are truly dedicated to providing a database of all mankind, you are going to have to let mankind in a little bit.

FamilySearch is still the keeper of knowledge, but to become a platform you have to let in the possibility that outside ideas, processes, and applications can bring value to the organization, as well as to the wider genealogical community. You have to evolve beyond notions that the best ideas come from inside the organization, or just from the leading partners in the space. There are opportunities for innovation and transformation in the long-tail stream, but you have to have a platform set up to encourage, participate in, and be able to identify value in the long tail of an API platform.

Twitter is one of the best examples of how any platform will have to let in outside ideas, applications, companies, and individuals. Much of what we consider as Twitter today was built in the platform ecosystem, from the iPhone and Android apps, to the desktop app TweetDeck, to terminology like the #hashtag. Over the last 5 years, Twitter has worked hard to find the optimal platform balance, regarding how they educate, communicate, invest, acquire, and incentivize their platform ecosystem. Listening to outside ideas goes well beyond the fact that Twitter is a publicly available social platform. With such a large platform of API developers it is impossible to let in all ideas, but through a sophisticated evangelism strategy of in-person and online channels, in 2014 Twitter has managed to find a balance that is working well.

Having a public facing platform doesn’t mean the flood gates are open for ideas and thoughts to just flow in--this is where service composition, and the certification and partner framework for FamilySearch, will come in. Through clear, transparent partner tiers, and open and transparent operations and communications, an optimal flow of outside ideas, applications, companies and individuals can be established—enabling a healthy, sustainable amount of change from the outside world.

Knowing All Of Your Platform Partners

The hallmark of any mature online platform is a well established partner ecosystem. If you’ve made the transition from service to platform, you’ve established a pretty robust approach to not just certifying and onboarding your partners--you have also stepped it up, knowing and understanding who they are and what their needs are, and investing in them throughout the lifecycle.

First off, profile everyone who comes through the front door of the platform. If they sign up for a public API key, who are they, and where do they potentially fit into your overall strategy? Don’t be pushy, but understand who they are and what they might be looking for, and make sure you have a well defined track for this type of user.

Next, qualify and certify as you have been doing. Make sure the process is well documented, but also transparent, allowing companies and individuals to quickly understand what it will take to get certified, what the benefits are, and examples of other partners who have achieved this status. As a developer building a genealogical mobile app, I need to know what I can expect, and have some incentive for investing in the certification process.

Keep your friends close, and your competition closer. Open the door wide for your competition to become platform users, and potentially partners. The 100+ year old technology company Johnson Controls (JCI) was concerned about what the competition might do if they opened up their building efficiency data resources to the public via the Panoptix API platform, but after it launched, they realized their competitors were now their customers, and partners in this new approach to doing business online for JCI.

When the Department of Energy decides what data and other resources it makes available via Data.gov or the agency's developer program, it has to deeply consider how this could affect U.S. industries. The resources the federal agency possesses can be pretty high value, with huge benefits for the private sector, but in some cases opening up APIs, or limiting access to APIs, can help or hurt the larger economy, as well as the Department of Energy developer ecosystem—there are lots of considerations when opening up API resources, and they vary from industry to industry.

There are no silver bullets when it comes to API design, deployment, management, and evangelism. It takes a lot of hard work, communication, and iterating before you strike the right balance of operations, and every business sector will be different. Without knowing who your platform users are, and being able to establish a clear and transparent road for them to follow to achieve partner status, FamilySearch will never elevate to a true platform. How can you scale the trusted layers of your platform, if your partner framework isn’t well documented, open, transparent, and well executed? It just can’t be done.

Meaningful Monetization For Platform

All of this will take money to make happen. Designing, and executing on, the technical and evangelism aspects I’m laying out will cost a lot of money, and on the consumer side, it will take money to design, develop, and manage the desktop, web, and mobile applications built around the FamilySearch platform. How will both the FamilySearch platform and its participants make ends meet?

This conversation is a hard one for startups, and established businesses, let alone when you are a non-profit, mission driven organization. Internal developers cost money; servers and bandwidth are getting cheaper, but are still a significant platform cost--and sustaining sales, bizdev, and evangelism will not be cheap either. It takes money to properly deliver resources via APIs, and even if the lowest tiers of access are free, at some point consumers are going to have to pay for access, resources, and advanced features.

The conversation around how you monetize API driven resources is going on across government, from cities up to the federal government, where the thought of charging for access to public data is unheard of. These are public assets, and they should be freely available. While this is true, think of the same situation when it comes to physical public assets that are owned by the government, like parks. You can freely enjoy many city, county, and federal parks, with sometimes small fees for usage, but if you want to actually sell something in a public park, you will need to buy permits, and often share revenue with the managing agency. We have to think critically about how we fund the publishing and refinement of publicly owned digital assets; as with physical assets, there will be much debate in coming years around what is acceptable, and what is not.

Woven into the tiers of partner access, there should always be provisions for applying costs, overhead, and even the generation of a little revenue to be applied in other ways. With great power comes great responsibility, and along with great access for FamilySearch partners, many will also be required to cover the costs of compute capacity, storage, and the other hard facts of delivering a scalable platform around any valuable digital assets, whether they are privately or publicly held.

Platform monetization doesn’t end with covering the costs of platform operation. Consumers of FamilySearch APIs will need assistance in identifying the best ways to cover their own costs as well. Running a successful desktop, web or mobile application will take discipline, structure, and the ability to manage overhead costs, while also being able to generate some revenue through a clear business model. As a platform, FamilySearch will have to bring to the table some monetization opportunities for consumers, providing guidance as part of the certification process regarding best practices for monetization, and even some direct opportunities for advertising, in-app purchases and other common approaches to application monetization and sustainment.

Without revenue greasing the gears, no service can achieve platform status. As with all other aspects of platform operations, the conversation around monetization cannot be one-sided, and just about the needs of the platform provider. Proactive steps need to be taken to ensure both the platform provider and its consumers are being monetized in the healthiest way possible, bringing as much benefit to the overall platform community as possible.

Open & Transparent Operations & Communications

How does all of this talk of platform and evangelism actually happen? It takes a whole lot of open, transparent communication across the board. Right now the only active part of the platform is the FamilySearch Developer Google Group; beyond that you don’t see any activity that is platform specific. There are active Twitter, Facebook, Google+, and mainstream and affiliate focused blogs, but nothing that serves the platform, or contributes to the feedback loop that will be necessary to take the service to the next level.

On a public platform, communications cannot all be private emails, phone calls, or face to face meetings. One of the things that allows an online service to expand to become a platform, then scale and grow into a robust, vibrant, and active community, is a stream of public communications, including blogs, forums, social streams, images, and video content. These communication channels cannot all be one way; they need to include forum and social conversations, and showcase platform activity by API consumers.

Platform communication isn’t just about getting direct messages answered; it is about public conversation, so everyone shares in the answer, and public storytelling, to help guide and lead the platform. Together with support via multiple channels, this establishes a feedback loop that, when done right, will keep growing, expanding, and driving healthy growth. The transparent nature of platform feedback loops is essential to providing everything consumers will need, while also bringing a fresh flow of ideas and insight within the FamilySearch firewall.

Truly Shifting The FamilySearch Culture

Top-down, bottom-up, outside-in, with a constant flow of oxygen via a vibrant, flowing feedback loop, and the nourishing, sanitizing sunlight of platform transparency--week by week, month by month, change can occur. It won’t all be good; there are plenty of problems that arise in ecosystem operations, but all of this has the potential to slowly shift culture when done right.

One thing that shows me the team over at FamilySearch has what it takes is that when I asked if I could write this up as a story, rather than just a proposal I email them, they said yes. This is a true test of whether or not an organization might have what it takes. If you are unwilling to be transparent about the problems you currently have, and the work that goes into your strategy, it is unlikely you will have what it takes to establish the amount of transparency required for a platform to be successful.

When internal staff, large external partners, and long tail genealogical app developers and enthusiasts are in sync via a FamilySearch platform driven ecosystem, I think we can consider a shift to platform has occurred for FamilySearch. The real question is how do we get there?

Executing On Evangelism

This is not a definitive proposal for executing on an API evangelism strategy, merely a blueprint for the seed that can be used to start a slow, seismic shift in how FamilySearch engages its API area, in a way that will slowly evolve it into a community--one that includes internal, partner, and public developers--and some day, with the right set of circumstances, FamilySearch could grow into a robust, social, genealogical ecosystem where everyone comes to access, and participate in, the mapping of mankind.

  • Defining Current Platform - Where are we now? In detail.
  • Mapping the Landscape - What does the world of genealogy look like?
  • Identifying Projects - What are the existing projects being developed via the platform?
  • Define an API Evangelist Strategy - Actually fleshing out a detailed strategy.
    • Projects
    • Storytelling
    • Syndication
    • Social
    • Channels
      • External Public
      • External Partner
      • Internal Stakeholder
      • Internal Company-Wide
  • Identify Resources - What resources currently exist? What is needed?
    • Evangelist
    • Content / Storytelling
    • Development
  • Execute - What does execution of an API evangelist strategy look like?
  • Iterate - What does iteration look like for an API evangelism strategy?
    • Weekly
    • Review
    • Repeat

As with many providers, you don’t want this to take 5 years, so how do you take a 3-5 year cycle, and execute in 12-18 months?

  • Invest In Evangelist Resources - It takes a team of evangelists to build a platform
    • External Facing
    • Partner Facing
    • Internal Facing
  • Development Resources - We need to step up the number of resources available for platform integration.
    • Code Samples & SDKs
    • Embeddable Tools
  • Content Resources - A steady stream of content should be flowing out of the platform, and syndicated everywhere.
    • Short Form (Blog)
    • Long Form (White Paper & Case Study)
  • Event Budget - FamilySearch needs to be everywhere, so people know that it exists. It can’t just be online.
    • Meetups
    • Hackathons
    • Conferences

There is nothing easy about this. It takes time, and resources, and there are only so many elements you can automate when it comes to API evangelism. For something that is very programmatic, it takes more of the human variable to make the API driven platform algorithm work. With that said it is possible to scale some aspects, and increase the awareness, presence, and effectiveness of FamilySearch platform efforts, which is really what is currently missing.

While as the API Evangelist I cannot personally execute on every aspect of an API evangelism strategy for FamilySearch, I can provide essential planning expertise for the overall FamilySearch API strategy, as well as regular check-ins with the team on how things are going, and help plan the roadmap. The two things I can bring to the table that are reflected in this proposal are an understanding of where the FamilySearch API effort currently is, and of what is missing to help get FamilySearch to the next stage of its platform evolution.

When operating within the corporate or organizational silo, it can be very easy to lose sight of how other organizations and companies are approaching their API strategies, and miss important pieces of how you need to shift your own. This is one of the biggest inhibitors of API efforts at large organizations, and one of the biggest imperatives for companies to invest in their API strategy, and begin the process of breaking operations out of their silo.

What FamilySearch is facing demonstrates that APIs are much more than the technical endpoint that most believe, it takes many other business, and political building blocks to truly go from API to platform.


Low Hanging Fruit For API Discovery In The Federal Government

I looked through 77 of the developer areas for federal agencies, resulting in reviewing approximately 190 APIs. While the presentation of 95% of the federal government developer portals is crap, it makes me happy that about 120 of the 190 APIs (over 60%) are actually consumable web APIs that didn't make me hold my nose and run out of the API area.

Of the 190, only 13 actually made me happy for one reason or another:

Don't get me wrong, there are other nice implementations in there. I like the simplicity and consistency in APIs coming out of GSA and SBA, but overall federal APIs reflect what I see a lot in the private sector: some developer making a decent API, but the follow-through and launch severely lacking what it takes to make the API successful. People wonder why nobody uses their APIs? Hmmmm....

A little minimalist simplicity in a developer portal, a simple explanation of what an API does, interactive documentation w/ Swagger, code libraries, and terms of service (TOS) would go a looooooooooooong way in making sure these government resources are found, and put to use.

Ok, so where the hell do I start? Let's look through these 123 APIs and see where the real low hanging fruit is for demonstrating the potential of APIs.json when it comes to API discovery in the federal government.
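To show what that fruit looks like once picked, here is a sketch of hand-building an apis.json index for one agency, using the draft APIs.json fields from the time (name, url, specificationVersion, apis[], properties[]). The Swagger URL is a placeholder, and apisjson.org has the current spec:

```python
# Sketch: build an apis.json discovery file for a federal developer
# area, using the draft APIs.json fields of the time. The swagger.json
# URL is a hypothetical placeholder.
import json

apis_json = {
    "name": "Department of Labor",
    "description": "APIs from the Department of Labor developer area",
    "url": "http://developer.dol.gov/apis.json",
    "specificationVersion": "0.14",
    "apis": [
        {
            "name": "DOL API",
            "humanURL": "http://developer.dol.gov/",
            "baseURL": "http://api.dol.gov/",
            "properties": [
                {"type": "Swagger", "url": "http://developer.dol.gov/swagger.json"}
            ],
        }
    ],
}

with open("apis.json", "w") as f:
    json.dump(apis_json, f, indent=2)
```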

Let's start again with the White House (http://www.whitehouse.gov/developers):

Only one API made it out of the USDA:

Department of Commerce (http://www.commerce.gov/developer):

  • Census Bureau API - http://www.census.gov/developers/ - Yes, a real developer area with supporting building blocks (Updates, News, App Gallery, Forum, Mailing List). Really could use interactive documentation though. There are URLs, but no active calls. It would be way easier if you could play with the data before committing. (B)
  • Severe Weather Data Inventory - http://www.ncdc.noaa.gov/swdiws/ - Fairly basic interface; wouldn’t take much to turn into a modern web API. Right now it's just a text file, with spec style documentation explaining what to do. Looks high value. (B)
  • National Climatic Data Center Climate Data Online Web Services - http://www.ncdc.noaa.gov/cdo-web/webservices/v2 - Oh yeah, now we are talking. That is an API. No interactive docs, but nice clean ones; it would be some work, but could be done. (A)
  • Environmental Research Division's Data Access Program - http://coastwatch.pfeg.noaa.gov/erddap/rest.html - Looks like a decent web API. Wouldn’t take too much to generate a machine readable definition, and make this into a better API area. (B)
  • Space Physics Interactive Data Resource Web Services - http://spidr.ngdc.noaa.gov/spidr/docs/SPIDR.REST.WSGuide.en.pdf - Well, it's a PDF, but it looks like a decent web API. It would be some work, but could turn into a decent API with Swagger specs. (B)
  • Center for Operational Oceanographic Products and Services - http://tidesandcurrents.noaa.gov/api/ - Fairly straightforward, simple API. Wouldn’t be hard to generate interactive docs for it. Spec needed. (B)

Arlington Cemetery:

Department of Education:

  • Department of Education - http://www.ed.gov/developers - Lots of high value datasets. Says API, but it is a JSON file. Wouldn't be hard to generate APIs for it all and make machine readable definitions. (B)

Energy:

  • Energy Information Administration - http://www.eia.gov/developer/ - Nice web API, simple clean presentation. Needs interactive docs. (B)
  • National Renewable Energy Laboratory - http://developer.nrel.gov/ - Close to a modern Developer area with web APIs. Uses standardized access (umbrella). Some of them have Swagger specs, the rest would be easy to create. (A)
  • Office of Scientific and Technical Information - http://www.osti.gov/XMLServices - Interfaces are pretty well designed, and Swagger specs would be straightforward. But docs are all PDF currently. (B)

Department of Health and Human Services (http://www.hhs.gov/developer):

Food and Drug Administration (http://open.fda.gov):

Department of Homeland Security (http://www.dhs.gov/developer):

Two loose cannons:

Department of Interior (http://www.doi.gov/developer):

Department of Justice (http://www.justice.gov/developer):

Labor:

  • Department of Labor - http://developer.dol.gov/ - I love their developer area. They have a great API, easy to generate API definitions. (A)
  • Bureau of Labor Statistics - http://www.bls.gov/developers/ - Web APIs in there. Complex, and lots of work, but can be done. API Definitions Needed. (B)

Department of State (http://www.state.gov/developer):

Department of Transportation (http://www.dot.gov/developer):

Department of the Treasury (http://www.treasury.gov/developer):

Veterans Affairs (http://www.va.gov/developer):

Consumer Financial Protection Bureau:

Federal Communications Commission (http://www.fcc.gov/developers):

Lone bank:

  • Federal Reserve Bank of St. Louis - http://api.stlouisfed.org/ - Good API and area, would be easy to generate API definitions. (B)

General Services Administration (http://www.gsa.gov/developers/):

National Aeronautics and Space Administration (http://open.nasa.gov/developer):

Couple more loose cannons:

Recovery Accountability and Transparency Board (http://www.recovery.gov/arra/FAQ/Developer/Pages/default.aspx):

Small Business Administration (http://www.sba.gov/about-sba/sba_performance/sba_data_store/web_service_api):

Last but not least.

That is a lot of potentially valuable API resources to consume. From my perspective, what has come out of GSA, SBA, and the White House Petition API probably represents the simplest, most consistent, and highest value targets for me. Next, maybe the wealth of APIs out of Interior and FDA. After that I'll cherry pick from the list, and see which are easiest.

I'm looking to create a Swagger definition for each of these APIs, and publish them as a Github repository, allowing people to play with each API. If I have to, I'll create a proxy for each one, because CORS is not common across the federal government. I'm hoping not to spend too much time on proxies, because once I get in there I always want to improve the interface and evolve a facade for each API, and I don't have that much time on my hands.
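
For the proxies, I am imagining something very thin, along these lines--a sketch using Flask and requests, where the upstream NOAA URL is just one example target from the list above, not a settled choice:

    # A bare-bones CORS proxy sketch: relay GET requests to a federal API
    # and add the Access-Control-Allow-Origin header the upstream lacks.
    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)
    UPSTREAM = "http://tidesandcurrents.noaa.gov/api/"  # example target

    @app.route("/proxy/<path:path>")
    def proxy(path):
        upstream = requests.get(UPSTREAM + path, params=request.args)
        resp = Response(upstream.content, status=upstream.status_code,
                        content_type=upstream.headers.get("Content-Type"))
        resp.headers["Access-Control-Allow-Origin"] = "*"
        return resp

    if __name__ == "__main__":
        app.run(port=5000)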


My Response To How Can the Department of Education Increase Innovation, Transparency and Access to Data?

I spent considerable time going through the Department of Education RFI, answering each question in as much detail as I possibly could. You can find my full response below. In the end I felt I could provide more value by summarizing my response, eliminating much of the redundancy across different sections of the RFI, and just cutting through the bureaucracy as I (and APIs) prefer to do.

Open Data By Default
All publicly available data at the Department of Education needs to be open by default. This is not just a mandate, this is a way of life. There is no data available on any Department of Education website that should not also be available for download. Open data downloads are not separate from existing website efforts at the Department of Education; they are the other side of the coin, making the same content and data available in machine readable formats rather than HTML--allowing valuable resources to be used in systems and applications outside of the department's control.

Open API When There Are Resources
The answer to whether or not the Department of Education should provide APIs is the same as whether or not the agency should deploy websites--YES! Not all individuals and companies will have the resources to download, process, and put downloadable resources to use. In these situations APIs can provide much easier access to open data resources, and when open data resources are exposed as APIs it opens up access to a much wider audience, even non-developers. Lightweight, simple API access to the open data inventory should be the default, along with data downloads, when resources are available. This approach of APIs by default will act as the training ground for not just 3rd party developers, but also internally, allowing Department of Education staff to learn how to manage APIs in a safe, read-only environment.
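
As a sketch of how lightweight this can be, consider a read-only API that simply serves an existing open data download as JSON--a hypothetical example using Flask, with the CSV filename standing in for any dataset the department already publishes:

    # A minimal read-only API over an existing open data download.
    # The CSV filename and route are hypothetical; any dataset would do.
    import csv

    from flask import Flask, jsonify

    app = Flask(__name__)

    def load_dataset(path="college-data-sample.csv"):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    @app.route("/datasets/colleges")
    def colleges():
        # Serve the same data that is available for download, as JSON.
        return jsonify(load_dataset())

    if __name__ == "__main__":
        app.run(port=5000)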

Using A Modern API Design, Deployment, and Management Approach
As usage of the Internet matured in 2000, leading technology providers like SalesForce and Amazon began using web APIs to make digital assets available to 3rd party partners, and 14 years later there are some very proven approaches to designing, deploying, and managing APIs. API management is not a new and bleeding edge approach to making assets available; in the private sector there are numerous API tools and services available, and this has begun to extend to the government sector with tools like API Umbrella from NREL, employed by api.data.gov and other agencies, as well as other tools and services being delivered by 18F at GSA. There are many proven blueprints for the Department of Education to follow when embarking on a complete API strategy across the agency, allowing innovation to occur around specific open data and other program initiatives, in a safe, proven way.
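
To make that concrete, here is roughly what consuming an umbrella-managed API looks like from the developer's side. This is a sketch only; the NREL endpoint and the shared DEMO_KEY are written from memory, so treat the exact URL and parameters as assumptions:

    # Calling an API behind the api.data.gov umbrella layer; the key is
    # passed as a query parameter. Endpoint and params are illustrative.
    import requests

    resp = requests.get(
        "https://developer.nrel.gov/api/alt-fuel-stations/v1.json",
        params={"api_key": "DEMO_KEY", "limit": 1},  # DEMO_KEY is rate limited
    )
    resp.raise_for_status()
    print(resp.json())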

Use API Service Composition For Maximum Access & Control
One benefit of 14 years of evolution around API design, deployment, and management is the establishment of sophisticated service composition of API resources. Service composition refers to the granular, modular design and deployment of APIs, combined with the ability to manage who has access to these resources. Modern API access is not just direct, public access to a database. API service composition allows for designing exactly the access to resources that is necessary, in alignment with business objectives, while protecting the privacy and security of everyone involved. Additionally, service composition allows for real-time awareness of how all data, content, and other resources at the Department of Education are accessed and put to use, allowing new APIs to be designed to support specific needs, and existing APIs to evolve based upon actual demand, not just speculation.
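
A rough sketch of what service composition looks like in code--granular scopes bundled into partner access tiers--where the tier names, scopes, and rate limits are entirely my own assumptions, not anything the department has defined:

    # Service composition sketch: granular scopes bundled into access tiers.
    # Tier names, scopes, and rate limits are illustrative assumptions.
    PLANS = {
        "public":   {"scopes": {"read:open-data"},                "rate_limit": 1000},
        "partner":  {"scopes": {"read:open-data", "read:forms"},  "rate_limit": 10000},
        "internal": {"scopes": {"read:open-data", "read:forms",
                                "write:forms"},                   "rate_limit": 100000},
    }

    def can_access(plan, scope):
        """Does the given service plan include access to this scope?"""
        return scope in PLANS.get(plan, {}).get("scopes", set())

    assert can_access("internal", "write:forms")
    assert not can_access("public", "write:forms")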

Deeper Understanding Of How Resources Are Used
A modern API service composition layer opens up the possibility of a new analytics layer that is not just about measuring and reporting access to APIs; it is about understanding precisely how resources are accessed in real-time, allowing API design, deployment and management processes to be adjusted in a more rapid and iterative way that contributes to the roadmap, while providing the maximum enforcement of security and privacy for everyone involved. When the Department of Education internalizes a healthy, agency-wide API approach, a new real-time understanding will replace this very RFI-centered process we are participating in, allowing for a new agility, with more control and flexibility than current approaches. An RFI cycle takes months and contains a great deal of speculation about what would be, whereas API access, coupled with healthy analytics and feedback loops, answers all the questions being addressed in this RFI in real-time, reducing resource costs and wasted cycles.

APIs Open Up Synchronous and Asynchronous Communication Channels
Open data downloads represent a broadcast approach to making Department of Education content, data and other resources available--a one way street. APIs provide two-way communication, bringing external partners and vendors closer to the Department of Education, while opening up feedback loops, reducing the distance between the agency and its private sector partners--potentially bringing valuable services closer to students, parents and the companies or institutions that serve them. Feedback loops at the Department of Education currently occur on annual or monthly cycles, at the speed of email or phone calls, with the closest being in person at events, something that can be a very expensive endeavor. Web APIs provide a real-time, synchronous and asynchronous communication layer that will improve the quality of service between the Department of Education and the public, at a much lower cost than traditional approaches.

Building External Ecosystem of Partners
With the availability of high value API resources, coupled with a modern approach to API design, deployment and management, an ecosystem of trusted partners can be established, allowing the Department of Education to share the workload with an external partner ecosystem. API service composition allows the agency to open up access to resources to only the partners who have proven they will respect the privacy and security of resources, and be dedicated to augmenting and extending the mission of the Department of Education. As referenced in the RFI, think about the ecosystem established by the IRS modernized e-file system, and how the H&R Blocks and Jackson Hewitts of the world help the IRS share the burden of the country's tax system. Where is the trusted ecosystem for the Department of Education? The IRS ecosystem has been in development for over 25 years; the Department of Education needs to get to work on theirs now.

Security Fits In With Existing Website Security Practices
One of the greatest benefits of web APIs is that they utilize the same existing web technologies that are employed to deploy and manage websites. You don't need additional security approaches to manage APIs beyond what exists for websites. Modern web APIs are built on HTTP, just like websites, and security can be addressed right alongside current website security practices--instead of delivering HTML, APIs are delivering JSON and XML. APIs go even further: by using modern API service composition practices, the Department of Education gains an added layer of security and control, which introduces granular levels of access to all resources, something that does not exist for websites. With a sensible analytics layer, API security isn't just about locking down, it is about understanding who is accessing resources and how they are using them, striking a balance between security and access--which is the hallmark of APIs.

oAuth Gives Identity and Access Control To The Student
Beyond basic web security, and the heightened level of control modern API management delivers, there is a 3rd layer to the security and privacy of APIs that does not exist anywhere else--oAuth. Open authentication, or oAuth, provides an identity and access layer on top of an API that gives end-users, the owners of personal data, control over who accesses their data. Technology leaders in the private sector are all using oAuth to give platform users control over how their data is used in applications and systems. oAuth is the heartbeat of API security, giving API platforms a way to manage security, and how 3rd party developers access and put resources to use, in a way that gives control to end-users. In the case of the Department of Education APIs, this means putting the parent and student at the center of who accesses and uses their personal data, something that is essential to the future of the Department of Education.
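
For the technically minded, here is a sketch of the standard three-legged oAuth 2.0 flow from the application's side, using the requests-oauthlib library. Every URL, credential, and scope below is a hypothetical placeholder for whatever the department would actually stand up:

    # Three-legged oAuth 2.0 authorization code flow, from the app's side.
    # All URLs, credentials, and scopes are hypothetical placeholders.
    from requests_oauthlib import OAuth2Session

    CLIENT_ID = "example-app"
    CLIENT_SECRET = "example-secret"
    AUTHORIZE_URL = "https://api.ed.example.gov/oauth/authorize"
    TOKEN_URL = "https://api.ed.example.gov/oauth/token"

    session = OAuth2Session(CLIENT_ID,
                            redirect_uri="https://app.example.com/callback",
                            scope=["read:student-data"])

    # 1. Send the student to the authorization page to grant (or deny) access.
    auth_url, state = session.authorization_url(AUTHORIZE_URL)
    print("Ask the student to visit:", auth_url)

    # 2. After approval, exchange the callback URL for an access token.
    redirect_response = input("Paste the full callback URL: ")
    session.fetch_token(TOKEN_URL, client_secret=CLIENT_SECRET,
                        authorization_response=redirect_response)

    # 3. Access only the data the student consented to share.
    profile = session.get("https://api.ed.example.gov/v1/me")
    print(profile.status_code)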

How Will Policy Be Changed?
I'm not a policy wonk, nor will I ever be one. One thing I do know is you will never understand the policy implications in one RFI, nor will you change policy to allow for API innovation in one broad stroke--you will fail. Policy will have to be changed incrementally, a process that fits nicely with the iterative, evolutionary life cycle of API management. The cultural change at the Department of Education, as well as evolutionary policy change at the federal level, will be the biggest benefits of APIs at the Department of Education.

An Active API Platform At Department of Education Would Deliver What This RFI Is Looking For
I know it is hard for the Department of Education to see APIs as something more than a technical implementation, and you want to know, understand and plan everything ahead of time--this is baked into the risk averse DNA of government. Even with this understanding, as I go through the RFI, I can't help but be frustrated by the redundancy, bureaucracy, over planning, and waste that is present in this process. An active API platform would answer every one of the questions you pose, with much more precision than any RFI can ever deliver.

If the Department of Education had already begun evolving an API platform for all open data sets currently available on data.gov, the agency would have the experience in API design, deployment and management to address 60% of the concerns posed by this RFI. Additionally the agency would be receiving feedback from existing integrators about what they need, who they are, and what they are building to better serve students and institutions. Because this does not exist there will be much speculation about who will use Department of Education APIs, and how they will use them and better serve students. While much of this feedback will be well meaning, it will not be rooted in actual use cases, applications and existing implementations. An active API ecosystem answers these questions, while keeping answers rooted in actual integrations, centered around specific resources, and actual next steps for real world applications.

The learning that occurs from managing read-only API access to low-level data, content and resources would provide the education and iteration necessary for the key staff at the Department of Education to reach the next level: read / write APIs, complete with oAuth-level security, which would be the holy grail in serving students and achieving the mission of the Department of Education. I know I'm biased, because of my focus on APIs, but read / write access to all Department of Education resources over the web and via mobile devices, with full control given to students, is the future of the agency. There is no "should we do APIs?"--there is only how. I'm afraid we are wasting time; we need to just do it, and learn to ask these questions along the way.

There is proven technology and processes available to make all Department of Education data, content and resources available, allowing both read and write access in a secure way, that is centered around the student. The private sector is 14 years ahead of the government in delivering private sector resources in this way, and other government agencies are ahead of the Department of Education in doing this as well, but there is an opportunity for the agency to still lead and take action, by committing the resources necessary to not just deploy a single API, but internalize APIs in a way that will change the way learning occurs in the coming decades across all US institutions.


A. Information Gaps and Needs in Accessing Current Data and Aid Programs

1. How could data sets that are already publicly available be made more accessible using APIs? Are there specific data sets that are already available that would be most likely to inform consumer choice about college affordability and performance?

Not everyone has the resources to download, process, and put open datasets to use. APIs can make all of the publicly available datasets more accessible to the public, allowing for easy URL access, deployment of widgets and visualizations, as well as integration with existing tools like Microsoft Excel. All datasets should have the option of being published in this way, but ultimately the Dept. of Ed API ecosystem should speak to which datasets would be most high value, and warrant API access.

2. How could APIs help people with successfully and accurately completing forms associated with any of the following processes: FAFSA; Master Promissory Note; Loan Consolidation; entrance and exit counseling; Income-Driven Repayment (IDR) programs, 15 such as Pay As You Earn; and the Public Student Loan Forgiveness program?

APIs will help decouple each data point on a form. Introductory information, each question, and other supporting resources can be broken up and delivered via any website or mobile application--evolving a linear, 2-dimensional form into an interactive application that people can engage with, providing the assistance needed to properly achieve the goals surrounding a form.

Each form initiative will have its own needs, and a consistent API platform and strategy from the Department of Education will help identify each form's unique requirements, and the custom delivery of just the resources that are needed for a form's target audience.

3. What gaps are there with loan counseling and financial literacy and awareness that could be addressed through the use of APIs to provide access to government resources and content?

First, APIs can provide access to the content that educates students about the path they are about to embark on, before they do, via the web and mobile apps they already frequent, rather than requiring them to visit the source site to learn. Putting the information students need into their hands, via their mobile devices, will increase the reach of content and the chances that students will consume it.

Second, APIs plus oAuth will give students control over their own educational finances, forcing them to better consider how they will manage all the relationships they enter into--the details of loans, grants, and the schools they attend. With more control over data and content will come a forced responsibility in understanding and managing their finances.

Third, this process will open up students' eyes to the wider world of online data and information, showing that APIs are driving all aspects of their financial life, from their banking and credit cards to managing their online credit score.

APIs are at the heart of the digital economy. The gift of API literacy, given to students when they first leave home, would carry with them throughout their lives, allowing them to better manage all aspects of their online and financial lives--and the Department of Education gave them that start.

4. What services that are currently provided by title IV student loan servicers could be enhanced through APIs (e.g., deferment, forbearance, forgiveness, cancellation, discharge, payments)?

A consistent API platform and strategy from the Department of Education would support the evolution of a suite of verified partners, such as title IV student loan servicers. A well planned partner layer within an ecosystem would allow student loan servicers to access data from students in real-time, with students having a say in who has access to their data, and how. These dynamics, introduced by and unique to API platforms that employ oAuth, provide new opportunities for partnerships to be established, evolve, and even be terminated when not going well.

API platforms using oAuth provide a unique 3-legged relationship between the data platform, 3rd party service providers, and students (users), that can be adopted to bring in existing industry partners, but more importantly provide a rich environment for new types of partners to evolve, who can improve the overall process and workflow a student experiences.

5. What current forms or programs that already reach prospective students or borrowers in distress could be expanded to include broader affordability or financial literacy information?

All government forms and programs should be evaluated for the pros and cons of an API program. My argument within this RFI response is focused on a consistent API platform and strategy from the Department of Education. APIs should be part of every existing program change, and every new initiative in the future.

B. Potential Needs to be Filled by APIs

1. If APIs were available, what types of individuals, organizations, and companies would build tools to help increase access to programs to make college more affordable?

A consistent API platform and strategy from the Department of Education will have two essential components: a partner framework, and service composition. A partner framework defines which external, 3rd party groups can work with Department of Education API resources. Service composition defines how these 3rd party groups can access and ultimately use Department of Education API resources.

All existing groups that the Department of Education interacts with should be evaluated for where they exist in the API partner framework, defining levels of access from the general public and students up to certified and trusted developer and business partnerships.

The partner framework and service composition for the Department of Education API platform should be applied to all existing individuals, organizations and companies, while also allowing new actors to enter the game, potentially redefining the partner framework and adding new formulas for API service composition, opening up the possibilities for innovation around Department of Education API resources.

2. What applications and features might developers, schools, organizations, and companies take interest in building using APIs in higher education data and services?

As with the questions of which Department of Education forms and programs APIs might apply to, and which individuals, organizations and companies will use them, what applications developers, schools, organizations and companies might build cannot be known until a platform is in place. These are the questions an API-centric company or institution asks of its API platform in real-time. You can't define who will use an API and how they will use it; it takes iteration and exploration before successful applications emerge.

3. What specific ways could APIs be used in financial aid processes (e.g., translation of financial aid forms into other languages, integration of data collection into school or State forms)?

When a resource is made available via an API, it is broken down into the smallest possible parts and pieces, allowing them to be re-used and re-purposed in every configuration imaginable. When you make form questions independently available via an API, it allows you to reorder, translate, and ask them in new ways.

This approach works well with forms, allowing each entry of a form to be accessible, transferable, and open for access, with the proper permissions and access levels controlled by the person who owns the form data. This opens up not just the financial aid process, but all form processes, to interoperate with other systems, forms, agencies and companies.

With the newfound modularity and interoperability introduced by APIs, the financial aid process could be broken down, allowing parents to play their role, schools theirs, and multiple agencies to be engaged, such as the IRS or the Department of Veterans Affairs (VA). All of this allows any involved entity or system to do its part in the financial aid process, minimizing the friction throughout the entire form process, even year over year.

4. How can third-party organizations use APIs to better target services and information to low-income students, first-generation students, non-English speakers, and students with disabilities?

Again, this is a question that should be asked of a Department of Education platform in real-time. Examples of how 3rd party organizations can better target services and information to students are the reason for an API platform. There is no way to know this ahead of time; I will leave it to domain experts to attempt an answer.

5. Would APIs for higher education data, processes, programs or services be useful in enhancing wraparound support service models? What other types of services could be integrated with higher education APIs?

A sensibly designed, deployed, managed and evangelized API platform would establish a rich environment for existing educational services to be augmented, but also allow for entirely new types of services to be defined. Again, I will leave it to domain experts to speak to specific service implementations based upon their goals and understanding of the space.

C. Existing Federal and Non-Federal Tools Utilizing APIs

1. What private-sector or non-Federal entities currently offer assistance with higher education data and student aid programs and processes by using APIs? How could these be enhanced by the Department’s enabling of additional APIs?

There are almost 10K public APIs available in the private sector. This should be viewed as a palette for developers, something that developers draw from as they are painting their apps. It is difficult for developers to know what they will be painting with, without knowing what resources are available. The open API innovation process is rarely able to articulate what is needed, then make that request for resources--API innovation occurs when valuable, granular resources are available from multiple sources, and developers assemble them, and innovate in new ways.

2. What private-sector or non-Federal entities currently work with government programs and services to help people fill out government forms? Has that outreach served the public and advanced public interests?

This is another question that a Department of Education platform should be answering for us. How would you know this without a properly defined partner framework? Stand up an API platform, and you will have the answer.

3. What instances or examples are there of companies charging fees to assist consumers in completing otherwise freely available government forms from other agencies? What are the advantages and risks to consider when deciding to allow third parties to charge fees to provide assistance with otherwise freely available forms and processes? How can any risks be mitigated?

I can't speak to what is already going on in the space regarding companies charging fees to consumers; I am not an expert on the education space at this level. This is just such a new paradigm made possible via APIs and open data, there just aren't that many examples in the space built around open government data.

First, the partner tiers of API platforms help verify and validate the individuals and organizations who are building applications and charging for services in the space. A properly designed, managed and policed partner tier can assist in mitigating risk in the evolution of such business ecosystems.

Second, API driven security layers using oAuth give control to end-users, allowing students to decide which applications, and ultimately which service providers, have access to their data, revoking that access when services are done or a provider is undesirable. With proper reporting and rating systems, policing of the API platform can be something that is done within the community, with the last mile of policing being done by the Department of Education.

Proper API management practices provide the identity, access and control layers necessary to keep resources and end-users safe. Ultimately, who has access to data, who can charge fees, and who plays a role in the ecosystem is up to the Department of Education and end-users, when applications are built on top of APIs.

4. Beyond the IRS e-filing example, what other similar examples exist where Federal, State, or local government entities have used APIs to share government data or facilitate participation in government services or processes - particularly at a scale as large as that of the Federal Student Aid programs?

This is a new, fast growing sector, and there are not a lot of existing examples, but there are a few:

Open311
An API driven system that allows citizens to report and interact with municipalities around issues within communities. While Open311 is deployed in specific cities such as Chicago and Baltimore, it is an open source platform and API that can be deployed to serve any size market.

Census Bureau
The US Census provides open data and APIs, allowing for innovation around government census survey data, used across the private sector in journalism, healthcare, and many other ways. The availability of government census data is continually spawning new applications, visualizations and other expressions, that wouldn’t be realized or known, if the platform wasn’t available.

We The People
The We The People API allows for 3rd-party integration with the White House petition process. It currently allows only read-only access to petition information and the petition process, but it is possibly one way that write APIs will emerge in the federal government.

There are numerous examples of open APIs and data being deployed in government, even from the Department of Education. All of them are works in progress, and will realize their full potential over time, with maturation, much iteration, and engagement with the public.

D. Technical Specifications

1. What elements would a read-write API need to include for successful use at the Department?

There are numerous building blocks that can be employed in managing read-write APIs, but there are a couple that will be essential to successful read-write APIs in government:

Partner Framework
Defined access tiers for consumers of API data, with appropriate public, partner and private (internal) levels of access. All write methods are only accessible by partner and internal levels of access, requiring verification and certification of companies and individuals who will be building on top of API resources.

Service Management
The ability to compose many different types of API resource access, create service bundles that are made accessible to different levels of partners. Service management allows for identity and access management, but also billing, reporting, and other granular level control over how services are composed, accessed and managed.

Open Authentication (oAuth 2.0)
All data made available via Department of Education API platforms that involves personally identifiable information will require the implementation of an open authentication, or oAuth, security layer. oAuth 2.0 provides an identity layer for the platform, requiring developers to use tokens that throttle access to resources for applications, through a process that is initiated, managed and revoked by end-users--providing the highest level of control over who has access to data, and what they can do with it, to the people whose personal data is involved.

Federated API Deployments
Not all APIs should be deployed and managed within the Department of Education firewall. API platforms can be made open source so that 3rd party partners can deploy them within their own environments. Then, via a sensible partner framework, the Department of Education can decide which partners it will not just allow to write to its APIs, but also pull data from, via their trusted systems and open API deployments.

APIs provide the necessary access to federal government resources, and a sensible partner framework and service management layer, in conjunction with oAuth, will provide the necessary controls for read / write APIs in government. If agencies are looking to push risk further outside the firewall, federated API deployments with trusted partners will have to be employed.

2. What data, methods, and other features must an API contain in order to develop apps accessing Department data or enhancing Department processes, programs, or services?

There are about 75 common building blocks for API deployments (http://management.apievangelist.com/building-blocks.html), aggregated after looking at almost 10K public API deployments. Each government API will have different needs when it comes to other supporting building blocks.

3. How would read-only and/or read-write APIs interact with or modify the performance of the Department’s existing systems (e.g., FAFSA on the Web)? Could these APIs negatively or positively affect the current operating capability of such systems? Would these APIs allow for the flexibility to evolve seamlessly with the Department’s technological developments?

There are always risks with API access to resources, but with a partner framework, service management, oAuth, and other common web security practices, these risks can be drastically reduced, and mitigated in real-time.

Isolated API Deployments
New APIs should rarely be deployed directly connected to existing systems. APIs can be deployed as an isolated interface, with an isolated data store. Existing systems can use the same API interface to read / write data and keep in sync with internal systems. API developers will never have access to existing systems and data stores, just the isolated, defined API interfaces that are part of a secure partner tier, accessing only the services they have permission to, and the end-user data they have been granted access to by end-users themselves.
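
A sketch of what that isolation might look like in practice: internal systems push fresh records into the isolated API store through the very same interface developers use, authenticated like any other trusted partner. The endpoint and record fields here are hypothetical:

    # Sync sketch: an internal system pushes records into the isolated API
    # data store using the same public interface, on a schedule.
    # Endpoint URL and record fields are hypothetical placeholders.
    import requests

    ISOLATED_API = "https://api.ed.example.gov/v1/datasets/aid-programs"

    def sync_records(records, api_key):
        for record in records:
            # Internal systems authenticate like any other trusted partner.
            resp = requests.put("%s/%s" % (ISOLATED_API, record["id"]),
                                json=record,
                                headers={"X-Api-Key": api_key})
            resp.raise_for_status()

    if __name__ == "__main__":
        sync_records([{"id": "pell-grant", "status": "active"}],
                     api_key="internal-system-key")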

Federated Deployments
As described above, if government agencies are looking to further reduce risk, API deployments can be designed and deployed as open source software, allowing partners within the ecosystem to download and deploy them. A platform partner framework can provide a verification and certification process for federated API deployments, allowing the Department of Education to decide who they will pull data from, reducing the risk to internal systems, and providing a layer of trust for integration.

Beyond these approaches to deploying APIs, one of the biggest benefits of web API deployments is that they use the same security as other government websites, just with an additional security layer determining who has access, and to what.

It should be a rare instance when an existing system has an API deployed with direct integration. API automation will provide the ability to sync API deployments with existing systems and data stores.

4. What vulnerabilities might read-write APIs introduce for the security of the underlying databases the Department currently uses?

As stated above, there should be no compromise in how data is imported into existing databases at the Department of Education. It is up to the agency to decide which APIs they pull data from, and how it is updated as part of existing systems.

5. What are the potential adverse effects on successful operation of the Department’s underlying databases that read-write APIs might cause? How could APIs be developed to avoid these adverse effects?

As stated above, isolated and external, federated API deployments will decouple the risk from existing systems. This is the benefit of APIs: they can be deployed as isolated resources, and integration and interoperability, internally and externally, is then up to the consumer, who decides what is imported and what isn't.

6. How should APIs address application-to-API security?

A modern API partner framework, service management and oAuth provide the necessary layer to identify who has access, and what resources can be used, not just by a company and user, but by each application they have developed.

Routing all API access through the partner framework, plus the associated service levels, will secure access to Department of Education resources by applications, with user- and app-level logging of what was accessed and used within each application.

oAuth provides a balance to this application-to-API security layer, allowing the Department of Education to manage the security of API access, and developers to request access for their applications, while ultimately control is in the hands of end-users, who define which applications have access to their data.

7. How should the APIs address API-to-backend security issues? Examples include but are not limited to authentication, authorization, policy enforcement, traffic management, logging and auditing, TLS (Transport Layer Security), DDoS (distributed denial-of-service) prevention, rate limiting, quotas, payload protection, Virtual Private Networks, firewalls, and analytics.

Web APIs use the exact same infrastructure as websites, allowing for the re-use of existing security practices employed for websites. However, APIs provide an added layer of security, logging, auditing and analytics, delivered through the lens of the partner framework and service composition, and limited only by the service management tooling available.

8. How do private or non-governmental organizations optimize the presentation layer for completion and accuracy of forms?

Business rules. As demonstrated as part of a FAFSA API prototype, business rules for each form field, along with rejection codes, can also be made available via API resources, allowing developers to build a form validation layer into all digital forms.

After submission, beyond the first line of defense provided by API developers building next generation forms, platform providers can provide further validation, review, and ultimately a status workflow that allows forms to be rejected or accepted based upon business logic.
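
A sketch of the business rules idea, with a single hypothetical FAFSA-style field: the rule and its rejection code are exposed as data, so any client can validate before submission. None of these field names, codes, or rules are the department's actual ones:

    # Business rules per form field, exposed as data so any client can
    # validate before submission. Field names and codes are hypothetical.
    FIELD_RULES = {
        "adjusted_gross_income": {
            "type": int,
            "min": 0,
            "rejection_code": "R-101",
            "message": "Must be a non-negative whole number.",
        },
    }

    def validate(field, value):
        rule = FIELD_RULES[field]
        try:
            value = rule["type"](value)
        except (TypeError, ValueError):
            return rule["rejection_code"], rule["message"]
        if value < rule["min"]:
            return rule["rejection_code"], rule["message"]
        return None  # passes the first line of defense

    print(validate("adjusted_gross_income", "-5"))     # ('R-101', ...)
    print(validate("adjusted_gross_income", "52000"))  # None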

9. What security parameters are essential in ensuring there is no misuse, data mining, fraud, or misrepresentation propagated through use of read- only or read-write APIs?

A modern API service management layer allows the platform provider to see all API resources that are being accessed, and by whom, and easily establish patterns for healthy usage, as well as patterns of misuse. When misuse is identified, service management allows providers to revoke access, and take action against the companies and individuals involved.

Beyond the platform provider, APIs allow for management by end-users through common oAuth flows and management tools. Sometimes end-users can identify an app is misusing their data, even before a platform provider might. oAuth gives them the control to revoke access to their data, via the API platform.

oAuth, combined with API service management tooling, allows for a unique security environment in which the platform can easily keep operations healthy, while end-users and developers help police the ecosystem as well. If platform providers give users the proper rating and reporting tools, they can help keep API and data consumers in check.
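
As a toy illustration of the kind of pattern watching a service management layer does, consider flagging consumers whose hourly request volume jumps far beyond their established baseline; the thresholds and log shape are my own assumptions:

    # Toy misuse detection: flag API consumers whose hourly request volume
    # far exceeds their historical baseline. Log format is an assumption.
    from collections import Counter

    def flag_misuse(access_log, baselines, multiplier=10):
        """access_log is a list of (consumer_id, endpoint) tuples for one hour."""
        counts = Counter(consumer for consumer, _ in access_log)
        return [consumer for consumer, count in counts.items()
                if count > multiplier * baselines.get(consumer, 1)]

    log = [("app-123", "/v1/students")] * 500 + [("app-456", "/v1/forms")] * 20
    print(flag_misuse(log, baselines={"app-123": 25, "app-456": 30}))
    # ['app-123'] -> access can be reviewed, throttled, or revoked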

10. With advantages already built into the Department’s own products and services (e.g., IRS data retrieval using FAFSA on the Web), how would new, third-party API-driven products present advantages over existing Department resources?

While existing products and services developed within the department do provide great value, the Department of Education cannot do everything on their own. Because of the access the Department has, some features will be better by default, but this won’t be the case in all situations.

The Department of Education and our government do not have unlimited resources, and with access to ALL the resources available via the department, the private sector can innovate, helping share the load of delivering vital services. It's not about whether public sector products and services are better than private sector ones, or vice versa; it is about the public sector and private sector partnering wherever and whenever it makes sense.

11. What would an app, service or tool built with read-write API access to student aid forms look like?

Applications will look like the TurboTax and TaxAct products developed within the IRS e-file ecosystem, and like the tools developed by the Sunlight Foundation on top of government open data and APIs.

We will never understand what applications are possible until the necessary government resources are available. All digital assets should be open by default, with a consistent API platform and strategy from the Department of Education, and the platform will answer this question.

E. Privacy Issues

1. How could the Department use APIs that involve the use of student records while ensuring compliance with potentially applicable statutory and regulatory requirements, such as the Family Educational Rights and Privacy Act (20 U.S.C. § 1232g; 34 CFR Part 99) and the Privacy Act (5 U.S.C. § 552a and 34 CFR Part 5b)?

As described above, the partner framework, service management and oAuth layers provide the control and logging necessary to execute and audit against any applicable statutory and regulatory requirement.

I can't articulate enough how much control this layer provides over how these resources are accessed, giving control to the involved parties who matter the most--end-users. All API traffic is throttled, measured and reviewed as part of service management, enforcing privacy as a partnership between the Department of Education, API consumers and end-users.

2. How could APIs ensure that the appropriate individual has provided proper consent to permit the release of privacy-protected data to a third party? How can student data be properly safeguarded to prevent its release and use by third parties without the written consent often required?

As articulated above, the partner framework, service management and oAuth address this. This is a benefit of API deployment: breaking down existing digital access into granular control, combined with oAuth and logging of all access--APIs take control to a new level.

oAuth has come to represent this new balance in the security and control of digital resources, allowing the platform, developers and end-users to each execute within their defined role on the platform. This balance, introduced by APIs and oAuth, allows data to be safeguarded, while also opening it up for the widest possible use in the next generation of applications and other implementations.

3. How might read-only or read-write APIs collect, document, and track individuals’ consent to have their information shared with specific third parties?

oAuth. Period.

4. How can personally identifiable information (PII) and other financial information (of students and parents) be safeguarded through the use of APIs?

Access to personally identifiable information (PII) via Department of Education APIs will be controlled by students and their parents. The most important thing you can do to protect PII is to educate the owner of that data about how to allow developer access to it in responsible ways that will benefit them.

APIs open up access, while oAuth will give students and parents the control they need to integrate with apps and existing systems to achieve their goals, while retaining the greatest amount of control over safeguarding their own data.

5. What specific terms of service should be enabled using API keys, which would limit use of APIs to approved users, to ensure that information is not transmitted to or accessed by unauthorized parties?

A well designed partner layer defining multiple levels of access, combined with sensible service packages, will establish the terms of service levels that will be bundled with API keys and oAuth-level identity and access to personally identifiable information.

Common approaches to deploying partner layers with appropriate service tiers, using oAuth, have been well established over the last 10 years in the private sector. Controlling access to API resources at a granular level, providing the greatest amount of access that makes sense, while knowing who is accessing data and how they are using it, is what APIs are designed for.

6. What are the relative privacy-related advantages and disadvantages of using read-only versus read-write APIs for student aid data?

You will face many of the same privacy concerns whether an API is read or write. If it is personally identifiable information, read or write access by the wrong parties violates a student's privacy, so ensuring that data is updated only via trusted application providers is essential.

A properly defined partner layer will separate who has read and who has write access. Proper logging and versioning of data is essential to ensure data integrity, allowing end-users to manage their data via an application or system with confidence.

F. Compliance Issues

1. What are the relative compliance-related advantages and disadvantages of using read-only versus read-write APIs for student aid data?

APIs provide a single point of access to student aid data. With the implementation of a proper partner framework, service management and oAuth, every single action via this doorway is controlled and logged. When it comes to auditing ALL operations, whether from the public, partners or internal consumers, APIs excel at satisfying compliance concerns.
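
To illustrate the "controlled and logged" point, here is a toy audit wrapper: every call through the platform doorway records who touched which resource, and when. The shape of the log entry is my own assumption, not a compliance standard:

    # Toy audit log: every API call through the platform doorway records
    # who accessed which resource, and when. Entry shape is an assumption.
    import json
    import time

    AUDIT_LOG = []

    def audited(handler):
        def wrapper(consumer_id, resource, *args, **kwargs):
            AUDIT_LOG.append({"time": time.time(),
                              "consumer": consumer_id,
                              "resource": resource})
            return handler(consumer_id, resource, *args, **kwargs)
        return wrapper

    @audited
    def read_student_aid(consumer_id, resource):
        return {"resource": resource, "data": "..."}

    read_student_aid("app-123", "/v1/aid-programs")
    print(json.dumps(AUDIT_LOG, indent=2))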

2. How can the Department prevent unauthorized use and the development of unauthorized products from occurring through the potential development of APIs? How might the Department enforce terms of service for API key holders, and prevent abuse and fraud by non-API key holders, if APIs were to be developed and made available?

As described above, the partner framework, service management and oAuth will provide the security layer needed to manage 99% of potential abuse, but overall enforcement via the API platform is a partnership between the Department of Education, API consumers, and end-users. The last mile of enforcement will be executed by the Department of Education, but it will be up to the entire ecosystem and platform to police and enforce in real-time.

3. What kind of burden on the Department is associated with enforcing terms and conditions related to APIs?

The Department of Education will handle the first line of defense, defining the partner tiers and service composition that wrap all access to APIs. The Department will also be the last mile of decision making and enforcement when violations occur. The platform should provide the data the department needs to make decisions, as well as the enforcement mechanisms, in the form of API key and access revocation, and banning apps, individuals and businesses from the ecosystem.

4. How can the Department best ensure that API key holders follow all statutory and regulatory provisions of accessing federal student aid funds and data through use of third-party products?

The first line of defense in ensuring that API key holders follow all statutory and regulatory provisions will be verification and validation of partners upon registration, when applications go into production, and before availability in application galleries and other directories in which students discover apps.

The second line of defense will be reporting requirements and usage patterns of API consumers and their apps. If applications regularly meet self-reporting requirements, and real-time usage patterns establish healthy behavior, they can retain their certification. If partners fail to comply, they will be restricted from the API ecosystem.

The last line of defense is the end-users--the students and parents. All end-users need to be educated regarding the control they have, and given reporting and ranking tools that allow them to file complaints and rank the applications that are providing quality services.

As stated several times, enforcement will be a community effort, something the Department of Education has ultimate control of, but requires giving the community agency as well.

5. How could prior consent from the student whom the data is about be provided for release of privacy- protected data to third party entities?

An API with an oAuth layer is this vehicle, providing the access, logging all transactions, and holding all partners to a quality of service. All the mechanisms are there in a modern API implementation; the access just needs to be defined.

6. How should a legal relationship between the Department and an API developer or any other interested party be structured?

I’m not a lawyer. I’m not a policy person. Just can’t contribute to this one.

7. How would a legal relationship between the Department and an API developer or any other interested party affect the Department’s current agreements with third-party vendors that operate and maintain the Department’s existing systems?

All of this will be defined in each partner tier, combined with appropriate service levels. With isolated API deployments, this should not affect current implementations.

However, a benefit of a consistent API strategy is that existing vendors can access resources via APIs, increasing the agility and flexibility of existing contracts. APIs are a single point of access, not just for the public, but for 3rd party partners as well as internal consumers. Everyone involved can participate and receive the benefits of API consumption.

8. What disclosures should be made available to students about what services are freely available in government domains versus those that could be offered at a cost by a third party?

A partner tier for the API platform will define the different levels of partners. Trusted, verified and certified partners will get different recommendation levels and access than lesser known services and applications from 3rd parties with less trusted levels of access.

9. If the Department were to use a third-party application to engage with the public on its behalf, how could the Department ensure that the Department follows the protocols of OMB Memorandum 10-23?

Again, the partner tier determines the level of access for each partner, and the protocols of any OMB memorandum can be built in--requiring that all data, APIs and code be open sourced, and that appropriate API access tiers show how data and resources are accessed and put to use.

API service management provides the reporting necessary to support government audits and regulations. Without this level of control on top of an API, oversight just isn't possible in the scalable way that APIs plus web and mobile applications offer.

G. Policy Issues

1. What benefits to consumers or the Department would be realized by opening what is currently a free and single-point service (e.g., the FAFSA) to other entities, including those who may charge fees for freely-available services and processes? What are the potential unintended consequences?

Providing API access to government resources is an efficient and sensible use of taxpayer money, and reflects the mission of all agencies, not just the Department of Education. APIs introduce the agility and flexibility needed to deliver the next generation of government applications and services.

The economy in a digital age will require a real-time partnership between the public sector and the private sector, and APIs are the vehicle for this. Much like it has done for private sector companies like Amazon and Google, APIs will allow the government to create new services and products that serve constituents with the help of the private sector, while also stimulating job growth and other aspects of the economy.

APIs will not be all upside; each program and initiative will have its own policy problems and unintended consequences. One problem that plagues API initiatives is a lack of resources, in the form of money and skilled workers, to make sure efforts are successful. Without proper management, poorly executed APIs can open up huge security holes, and introduce privacy concerns at a scale never imagined.

APIs need to be managed properly, with sensible real-time controls for keeping operations in check.

2. How could the Department ensure that access to title IV, HEA student aid programs truly remains free, even amidst the potential development of third-party apps that may charge a fee for assistance in participating in free government programs, products, and services with or without providing legitimate value-added services?

Partner Framework + Service Management = Quality of Service Across Platform

3. What other policy concerns should the Department consider with regard to the potential development of APIs for higher education data and student aid processes at the Department?

Not being a policy or education expert, I will leave this to others to determine. It is also something that should be built into API operations, and discovered on a program by program basis.

4. How would APIs best interact with other systems already in use in student aid processes (e.g., within States)?

The only way you will know is if you do it. The IRS e-file system offers some help here, but even it isn't a perfect model to follow. We will never know the potential until a platform is stood up, and resources are made available. All signs point to APIs opening up a huge amount of interoperability, not just between states and the federal government, but also with cities and counties.

5. How would Department APIs benefit or burden institutions participating in title IV, HEA programs?

If APIs aren't given the proper resources to operate, they can introduce security, privacy and support concerns that would not have been there before. A properly run API initiative will provide support, while an underfunded, undermanned initiative will just further burden institutions.

6. While the Department continues to enhance and refine its own processes and products (e.g., through improvements to FAFSA or the IDR application process), how would third-party efforts using APIs complement or present challenges to these processes?

These two things should not be separate. Internal efforts should be seen as just another partner layer within the API ecosystem. All future services and products developed internally within the Department of Education should use the same API infrastructure developed for partners and the public.

If APIs are not used internally, API efforts will always fail. APIs are not just about providing access to external resources; they are about opening up the Department to think about its resources in an external way that benefits the public and partners, as well as those within the government.


Essential Variable in Big Data Algorithm: Transparency

It is easy to get excited about the potential around "big data". Many individuals and companies feel this latest trend is all about offering up big data solutions with business models built around algorithms that founders consider their "secret sauce".

I don't have a problem with this, more power to you, however I personally feel big data solutions, especially those within government should be more transparent than many of the secret sauce, big data approaches we’ve seen to date.

Alex Howard (@digiphile) has a great post at TechRepublic, called "Data-driven policy and commerce requires algorithmic transparency", which outlines this very well. Alex uses the phrase "algorithmic accountability", which I think sums all of this up very nicely.

When it comes to big data solutions, especially in the public sector, it is fine to collect large amounts of data, and offer up analytics, visualizations and other big data tools, but algorithmic accountability is something that will be essential in moving forward and building trust across all industries when it comes to big data.


Open Data And API Efforts Rendered Useless When Privacy Is Ignored

On the second anniversary of the Open Government Partnership (OGP), we are celebrating a "global effort to encourage transparent, effective, and accountable governance", and the fact that:

OGP has grown to 60 countries that have made more than 1000 commitments to improve the governance of more than two billion people around the globe. OGP is now a global community of government reformers, civil society leaders, and business innovators working together to develop and implement ambitious open government reforms and advance good governance.

That is some pretty significant platform growth! While reading this I'm reminded of how any amount of perceived growth and value delivered via an "open data or API platform" can be immediately muted by the omission of very fundamental building blocks like privacy.

Let's review the building blocks of the Open Government Partnership:

  • Expand Open Data - Open Data fuels innovation that grows the economy and advances government transparency and accountability. Government data has been used by journalists to uncover variations in hospital billings, by citizens to learn more about the social services provided by charities in their communities, and by entrepreneurs building new software tools to help farmers plan and manage their crops. Building upon the successful implementation of open data commitments in the first U.S. National Action Plan, the new Plan will include commitments to make government data more accessible and useful for the public, such as reforming how Federal agencies manage government data as a strategic asset, launching a new version of Data.gov, and expanding agriculture and nutrition data to help farmers and communities.
  • Modernize the Freedom of Information Act (FOIA) - The FOIA encourages accountability through transparency and represents a profound national commitment to open government principles. Improving FOIA administration is one of the most effective ways to make the U.S. Government more open and accountable. Today, the United States announced a series of commitments to further modernize FOIA processes, including launching a consolidated online FOIA service to improve customers’ experience and making training resources available to FOIA professionals and other Federal employees.
  • Increase Fiscal Transparency - The Administration will further increase the transparency of where Federal tax dollars are spent by making federal spending data more easily available on USASpending.gov; facilitating the publication of currently unavailable procurement contract information; and enabling Americans to more easily identify who is receiving tax dollars, where those entities or individuals are located, and how much they receive.
  • Increase Corporate Transparency - Preventing criminal organizations from concealing the true ownership and control of businesses they operate is a critical element in safeguarding U.S. and international financial markets, addressing tax avoidance, and combatting corruption in the United States and abroad. Today we committed to take further steps to enhance transparency of legal entities formed in the United States.
  • Advance Citizen Engagement and Empowerment - OGP was founded on the principle that an active and robust civil society is critical to open and accountable governance. In the next year, the Administration will intensify its efforts to roll back and prevent new restrictions on civil society around the world in partnership with other governments, multilateral institutions, the philanthropy community, the private sector, and civil society. This effort will focus on improving the legal and regulatory framework for civil society, promoting best practices for government-civil society collaboration, and conceiving of new and innovative ways to support civil society globally.
  • More Effectively Manage Public Resources - Two years ago, the Administration committed to ensuring that American taxpayers receive every dollar due for the extraction of the nation’s natural resources by committing to join the Extractive Industries Transparency Initiative (EITI). We continue to work toward achieving full EITI compliance in 2016. Additionally, the U.S. Government will disclose revenues on geothermal and renewable energy and discuss future disclosure of timber revenues.

How can you argue with that? It's a very sensible set of open government platform building blocks, right? However, when you look at the bigger picture you realize a significant building block is missing, one that those of us in the tech sector have learned is essential to a healthy platform ecosystem:

  • Citizen Data Privacy - Ensuring that government respects the online privacy of each and every U.S. citizen, preventing unwanted harvesting of private data or metadata that exists in cloud environments, on computer and mobile devices, or is transported across telecommunications infrastructure locally or abroad. When privacy is compromised in the name of law enforcement or national security, the laws, rules and procedures around these accepted situations are made publicly accessible.

It is great that our government is committed to expanding open data, increasing transparency, engaging citizens efficiently, and sensibly managing public resources. However, if our government wants to act as an open platform, just like any private sector platform, it must respect user privacy.

Without ensuring privacy for users, it doesn't matter how forward thinking your open data, information, and API strategy is. Privacy and security are essential building blocks for any private or public sector entity looking to build an open platform.

Nice work around the Open Government Partnership, but without addressing the privacy of citizens it is rendered pretty useless.


Transparency Is Not Just About Github, Crowdsourcing, Open Source And Open APIs

I wrote a piece on the rollout of Healthcare.gov, and while there are numerous illnesses in the government that contributed to the launch being such a failure, my analysis took it up to the highest level possible, where the biggest problem can be attributed to a lack of transparency.

The post got a lot of comments via Twitter, LinkedIn, Facebook, and other conversation threads I participated in, from people who disagreed with me and kept interpreting my use of transparency as referring to using Github, crowdsourcing, open source software, or APIs--stating that these elements would not have saved the project, and that we just needed to fix government contracting and get the right people on the job.

These responses fascinate me and reflect what I see from technologists across the space. Developers often bring their baggage with them and don't engage with people or read articles entirely; they bring their understanding of a certain word, latch onto it, and plow forward without critical analysis or deeper background research. I'm not exempt from this--I work hard to reverse this characteristic in my own personality.

What I mean by transparency is letting the sunlight into your overall operations, by default. In the case of Healthcare.gov, one of the numerous contractors applied this to front-end development, but the entire rest of the supply chain did not. The front-end group used Github, open source software, and APIs, and did crowdsource their work at several critical points of the development cycle. However, even this represents just the visible building blocks, not the resulting effects of "being transparent".

First and foremost, this approach to projects makes you, the developer, project or product manager, think differently about how you structure things. You know that your work will see the light of day and potentially be scrutinized by others, and that alone immediately changes how you work. There is no hiding in the shadows, where mistakes, cut corners, and shortcomings can be concealed from the public.

Even if you don't use Github, never listen to comments or issues raised by the public, keep all software proprietary, and talk directly to your code libraries and database, simply showcasing the project work out in the open will bring you the benefits of transparency. It just so happens that Github, established feedback loops, open source software, and APIs amplify transparency, and let in the healing benefits of sunlight.

There are numerous reasons I hear for NOT doing this. They are usually couched in terms of the additional resources required or a lack of expertise with open source projects, but really they tend to mask incompetency, insecurity, corruption, or deep-rooted beliefs that protecting your intellectual property will result in more money to be made.

Transparency isn't about a specific tool, platform, or process. It is about opening up, letting other people in, or possibly being almost entirely public in everything you do. I agree that not everyone is ready for this approach, and it may not be suited for every business sector, but I think you'd be surprised how easy it actually is, and how it can help you learn, grow, and reduce the spread of illnesses within your project life cycle that might eventually cause you to fail.


Access, Interoperability, Privacy and Security Of Technology Will Set The Stage For The Future of Education

In 2010 when I started API Evangelist I saw the technological potential of APIs, but while the rest of the online space was focused on what APIs could do for developers, I was focused on what APIs could do for the average person. APIs don't just open up access for developers; they open up access for end-users, introducing interoperability, data portability, and ultimately tools that give them control over their own data, content, and other valuable resources.

This realization has been central to my mission at API Evangelist, which is about educating the masses about APIs. What is an API? Why are APIs important? I strongly feel that APIs empower end-users to make better decisions about which platforms they use and which applications they adopt, and give them more ownership, control, and agency in their own worlds. When you help an individual understand they can host their own Wordpress blog and migrate away from the cloud hosted version of Wordpress, or migrate their blog from Blogger to Wordpress via APIs, you are giving the gift of web literacy.
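
To make the idea concrete, here is a minimal sketch of what API-driven blog migration can look like. Everything in it--endpoints, field names, token--is a hypothetical placeholder for illustration, not the actual Blogger or Wordpress API:

```python
# A minimal sketch of API-driven data portability: pull posts out of one
# platform and push them into another under the user's own credentials.
# Endpoints, fields, and the token are hypothetical placeholders.
import requests

SOURCE_API = "https://old-blog.example.com/api/posts"  # hypothetical
TARGET_API = "https://new-blog.example.com/api/posts"  # hypothetical
TARGET_TOKEN = "token-the-user-authorized"             # e.g., granted via oAuth

# Pull the user's posts from the platform they are leaving.
posts = requests.get(SOURCE_API).json()

# Push each post into the new platform--the user, not the platform,
# decides where their content lives.
for post in posts:
    requests.post(
        TARGET_API,
        headers={"Authorization": "Bearer " + TARGET_TOKEN},
        json={"title": post["title"], "content": post["content"]},
    )
```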

Leading technology platforms like Amazon, Google, eBay and Flickr have long realized the potential of opening up APIs and empowering end-users. Since then, thousands of platform providers have also realized that opening up APIs enables developers and end-users to innovate around their platforms and services, and that there is much more opportunity for growth, expansion, and revenue when end-users are API literate. Users are much more likely to adopt a platform and deeply integrate it into their personal or business lives if they are able to connect it with their other cloud services, taking control of and optimizing their information and workflow.

Helping business owners, developers and end-users understand the potential that APIs introduce is essential to the future of education, and will be at the heart of a healthy and thriving economy. There is a key piece of technology that reflects this new paradigm and is currently operating and thriving across the web, called oAuth. This open authorization (oAuth) standard provides the ability for platforms to open up access to content and data so that developers can build web and mobile applications, but in a way that gives control to end-users, who are ultimately the owners of a platform's content and data, and are the target of the applications that developers are building.

oAuth has introduced a new online dance, widely known as the three-legged oAuth flow, that is being used across common platforms from Google to Facebook, allowing end-users, developers, and platforms to interact in a way that makes the Internet go round. If any of these three legs is out of balance, security or privacy is compromised, or one of the players is not educated and exploitation occurs, the cycle quickly breaks down. This delicate balance encourages all three legs to be educated, empowered, and in control of their role in this critical supply chain of the Internet.
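
For readers who want to see the dance itself, here is a minimal sketch of the three legs, using the OAuth 2.0 authorization code pattern. The platform URLs and credentials are hypothetical placeholders, not any specific provider's API:

```python
# A minimal sketch of the three-legged oAuth dance, using the OAuth 2.0
# authorization code flow. All URLs and credentials are hypothetical.
import requests
from urllib.parse import urlencode

AUTHORIZE_URL = "https://platform.example.com/oauth/authorize"  # hypothetical
TOKEN_URL = "https://platform.example.com/oauth/token"          # hypothetical
CLIENT_ID = "registered-application-id"
CLIENT_SECRET = "registered-application-secret"
REDIRECT_URI = "https://app.example.com/callback"

# Leg 1: the application sends the end-user to the platform to approve access.
params = {
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "read",
}
print("Send the user to:", AUTHORIZE_URL + "?" + urlencode(params))

# Leg 2: once the end-user approves, the platform redirects back to the
# application with a short-lived authorization code.
authorization_code = "code-from-the-callback"  # captured by the redirect handler

# Leg 3: the application exchanges the code for an access token--at no point
# does the developer ever see the end-user's password.
response = requests.post(TOKEN_URL, data={
    "grant_type": "authorization_code",
    "code": authorization_code,
    "redirect_uri": REDIRECT_URI,
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
})
access_token = response.json().get("access_token")
```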

Online platforms, and the web and mobile applications that are built on them, are playing an ever increasing role in every aspect of our personal, professional and public lives, from turning in class assignments in high school to paying our taxes as adults. APIs and oAuth are being used as the pipes and gatekeepers for everything from photos and location data to our vital healthcare records. These online platforms will play a central role in our education from infancy to retirement, and being educated, aware and literate in how these platforms operate is essential to it all working--for everyone involved.

The future of education depends on all online platforms providing access, interoperability, and data portability, while also fully respecting end-users' privacy and security and investing in their education about these features and the opportunities they open up. Education will continue to exist within traditional institutions, but will persist throughout our lives in this new online environment. It is imperative that every citizen possesses a certain level of web literacy to be able to learn, grow, and evolve as a human being in this increasingly digital society.

I will be speaking at OpenVA, Virginia’s First Annual Open and Digital Learning Resources Conference on this topic, and continue to work this message into my overall API Evangelist message. The link between APIs, the access they provide, and education is critical. It is something that I feel provides just as many opportunities for exploitation as it does for benefiting end-users, developers, and platforms--requiring a great deal of transparency and scrutiny.

Lots to think about, and discuss. I look forward to seeing you at the University of Mary Washington for OpenVA.


Lack of Transparency Is Healthcare.gov's Biggest Bottleneck

If you pay attention to the news, you have probably heard about the technical trouble with the launch of the Affordable Care Act, 50 state marketplaces and the central Healthcare.gov site.

People across the country are encountering show-stopping bugs in the sign up process, and if you go to the healthcare.gov site currently, you get a splash page that states, "We have a lot of visitors on the site right now." If you stay on the page it will refresh every few seconds until, eventually, you might reach the registration form.

I worked at it for hours last night and was finally able to get into the registration process, only to hit errors several steps in, but eventually I got through the flow and successfully registered for an account, scrutinizing the code and network activity behind the scenes as I went along.

There are numerous blog posts trying to break down what is going wrong with the Healthcare.gov registration process, but ultimately many of them are very superficial, making vague accusations about the vendors involved and the perceived technology at play. I think one of the better ones was A Programmer's Perspective On Healthcare.gov And ACA Marketplaces, by Paul Smith.

Late last night, the Presidential Innovation Fellows (PIF), led by round one PIF Philip Ashlock (@philipashlock), set out to develop our own opinion about what is happening behind the scenes, working our way through the registration process and trying to identify potential bottlenecks.

When you look at the flow of calls behind each registration page you see a myriad of calls to JavaScript libraries, and to internal and external services that support the flow. There definitely could have been more thought put into preparing this architecture for scaling, but a couple of calls really stand out:

https://www.healthcare.gov/marketplace/global/en_US/registration.js
https://www.healthcare.gov/ee-rest/ffe/en_US/MyAccountEIDMUnsecuredIntegration/createLiteEIDMAccount

The second URL pretty clearly refers to the Center for Medicare and Medicaid Services (CMS) Enterprise Identity Management (EIDM) platform, which provides new user registration, access management, and identity lifecycle management, so that users of the Healthcare Exchange Plan Management system can register and get CMS credentials. The registration.js file appears to handle much of the registration process.

Philip identified the createLiteEIDMAccount call as the most telling part of the payload and response, and most likely the least resilient portion of the architecture, standing out as a potentially severe bottleneck. The CMS EIDM platform is just one potential choke point, and it isn't a bleeding edge solution--it is pretty straightforward enterprise architecture that may not have had adequate resources allocated to handle the load. I'm guessing under-allocated server and application resources are playing a rampant role across Healthcare.gov operations.
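
For illustration only (this is not what we actually ran), here is a minimal sketch of how one might time the individual calls behind a registration page to spot a slow dependency:

```python
# A minimal sketch of timing the calls behind a registration page to find
# a slow dependency. The endpoint list is illustrative; probing production
# systems should only be done with permission and restraint.
import time
import requests

endpoints = [
    "https://www.healthcare.gov/marketplace/global/en_US/registration.js",
    # ...the other calls observed in the browser's network activity
]

for url in endpoints:
    start = time.time()
    try:
        response = requests.get(url, timeout=30)
        elapsed = time.time() - start
        print("%s %.2fs %s" % (response.status_code, elapsed, url))
    except requests.RequestException as error:
        print("FAILED %s: %s" % (url, error))
```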

Many of the articles I've read over the last couple days make reference to the front-end of Healthcare.gov using Jekyll and APIs, and refer to the dangers of open washing and technological solutionism, when this is most likely an under-allocated, classic enterprise piece of the puzzle that can't keep up. I do agree with portions of the open washing arguments, specifically around showcasing the project as "open" when in reality the front-end is the only open piece, with the backend being a classic, closed architecture and process.

Without transparency into the entire stack of Healthcare.gov and the marketplace rollouts, it is not an open project--I don't care if any part of it is--making it open-washing. The teams in charge of the front-end were very transparent, getting feedback on the front-end implementation and publishing the code to Github for review. There are no guarantees, but if the entire backend stack had followed the same approach, publishing technology choices, architectural approaches, and load testing numbers throughout a BETA cycle for the project, things might have been different on launch day.

Transparency goes a long way toward improving not just the technology and architecture, but can also shed light on illnesses in the procurement, contracting, and other business and political aspects of projects. Many technologists will default to thinking I'm talking about open source, open tools, or open APIs, but in reality I'm talking about an open process.

In the end, this story is just opinion and speculation. Without any transparency into exactly what the backend architecture of Healthcare.gov and the marketplaces is, we have no idea what the problem actually is. I'm just soapboxing my opinion like the authors of every other story published about this problem over the last couple days--none of them any more factual than some of my other fictional pieces about this being an inside job or a cleverly disguised denial of service attack!


IRS Modernized e-File (MeF): A Blueprint For Public & Private Sector Partnerships In A 21st Century Digital Economy (DRAFT)

Download as PDF

The Internal Revenue Service is the revenue arm of the United States federal government, responsible for collecting taxes and for the interpretation and enforcement of the Internal Revenue Code.

The first income tax was assessed in 1862 to raise funds for the American Civil War, and over the years the agency has grown and evolved into a massive federal entity that collects over $2.4 trillion each year from approximately 234 million tax returns.

While the IRS has faced many challenges in its 150 years of operations, the last 40 years have demanded some of the agency's biggest transformations at the hands of technology--more than at any time since its creation.

In the 1970s, the IRS began wrestling with the challenge of modernizing itself using the latest computer technology. This eventually led to a pilot program in 1986 of a new Electronic Filing System (EFS), which aimed in part to gauge the acceptance of such a concept by tax preparers and taxpayers.

By the 1980s, tax collection had become very complex, time-consuming, costly, and riddled with errors, due to what had become a dual process of managing paper forms while also converting them into a digital form so they could be processed by machines. The IRS desperately needed to establish a solid approach that would enable the electronic submission of tax forms.

It was a rocky start for the EFS, and Eileen McCrady, systems development branch and later marketing branch chief, remembers, “Tax preparers were not buying any of it--most people figured it was a plot to capture additional information for audits." But by 1990, IRS e-file operated nationwide, and 4.2 million returns were filed electronically. This proved that EFS offered a legitimate approach to evolving beyond a tax collection process dominated by paper forms and manual filings.

Even Federal Agencies Can't Do It Alone

Even with the success of early e-file technology, the program would not have gotten the momentum it needed without the support of two major tax preparation partners--H&R Block and Jackson Hewitt. These partnerships helped change the tone of EFS efforts, making e-File more acceptable and appealing to tax professionals. It was clear that e-File needed to focus on empowering a trusted network of partners to submit tax forms electronically, sharing the load of tax preparation and filing with 3rd party providers. This included not just the filing technology, but also a network of evangelists spreading the word that e-File was a trustworthy and viable way to work with the IRS.

Bringing e-File Into The Internet Age

By 2000, Congress had passed IRS RRA 98, which contained a provision setting a goal of an 80% e-file rate for all federal tax and information returns. This, in effect, forced the IRS to upgrade the e-File system for the Internet age, otherwise it would not be able to meet this mandate. A working group was formed, comprised of tax professionals and software vendors, that would work with the IRS to design, develop, and implement the Modernized e-File (MeF) system, which employed the latest Internet technologies, including a new approach to web services using XML that would allow 3rd party providers to submit tax forms in a real-time, transactional manner (this differed from the batch submissions required in previous versions of the EFS).

Moving Beyond Paper One Form At A Time

Evolving beyond 100 years of paper processes doesn't happen overnight. Even with the deployment of the latest Internet technologies, you have to incrementally bridge legacy paper processes to a new online, digital world. After the deployment of MeF, the IRS worked year by year to add the myriad of IRS forms to the e-File web service, allowing software companies, tax preparers, and corporations to digitally submit forms into IRS systems over the Internet. Form by form, the IRS was transformed from a physical document organization into a distributed network of partners that could submit digital forms through a secure, online web service.

Technological Building Blocks

The IRS MeF solution represents a new approach to using modern technology by the federal government in the 21st century Internet age. In the last 15 years, a new breed of Internet enabled software standards has emerged, enabling the government to partner with the private sector, as well as other government agencies, in ways that were unimaginable just a decade ago.

Web Services

Websites and applications are meant for humans. Web services, also known as APIs, are meant for other computers and applications. Web services have allowed the IRS to open up the submission of forms and data into central IRS systems, while also transmitting data back to trusted partners regarding errors and the status of form submissions. Web services allow the IRS to stick with what it does best--the receiving, filing, and auditing of tax filings--while trusted partners use web services to deliver e-Filing services to customers via custom developed software applications.

Web services are designed to utilize existing Internet infrastructure used for everyday web operations as a channel for delivering trusted services to consumers around the country, via the web.
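
As a rough illustration of the pattern, here is a minimal sketch of a form-submission web service call. The endpoint, envelope structure, and element names are hypothetical placeholders, not the actual MeF service definition:

```python
# A minimal sketch of submitting an XML form bundle to a web service and
# receiving an acknowledgement. The endpoint and envelope structure are
# hypothetical placeholders, not the actual MeF service definition.
import requests

SUBMIT_URL = "https://webservices.example.gov/efile/submit"  # hypothetical

soap_envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <SubmissionBundle>
      <SubmissionId>2013-000001</SubmissionId>
      <ReturnData><!-- the XML form data goes here --></ReturnData>
    </SubmissionBundle>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    SUBMIT_URL,
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)

# The acknowledgement comes back in near real-time, and the partner can
# relay the accepted/rejected status to the taxpayer.
print(response.status_code, response.text)
```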

An XML Driven Communication Flow

XML is a way to describe each element of an IRS form and its supporting data. XML makes paper forms machine readable, so that the IRS and 3rd party systems can communicate using a common language. It allows the IRS to share a common set of logic around each form, then use what are known as schemas to validate the XML submitted by trusted partners against a set of established business rules that enforce the IRS code. XML gives the IRS the ability to communicate with 3rd party systems using digital forms, applying business rules to reject or accept submitted forms, which can then be stored in an official IRS repository in a way that can be viewed and audited by IRS employees (using stylesheets that make the XML easily readable by humans).
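
Here is a minimal sketch of that schema-driven validation step, using hypothetical file names rather than actual IRS schema artifacts:

```python
# A minimal sketch of validating a submitted XML return against a published
# schema before accepting it. The file names are hypothetical placeholders,
# not actual IRS schema artifacts.
from lxml import etree

schema = etree.XMLSchema(etree.parse("form_schema.xsd"))  # hypothetical schema
submission = etree.parse("submitted_return.xml")          # hypothetical return

if schema.validate(submission):
    print("Accepted: the return conforms to the form's business rules.")
else:
    # Each violation maps back to a business rule the partner must fix.
    for error in schema.error_log:
        print("Rejected:", error.message)
```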

Identity and Access Management (IAM)

When you expose web services publicly over the Internet, secure authentication is essential. The IRS MeF system is a model for securing the electronic transmission of data between the government and 3rd party systems. The IRS has employed the Internet Filing Application (IFA) and Application to Application (A2A) channels, which follow the Web Services-Interoperability (WS-I) security standards. Security of the MeF system is overseen by the IRS MITS Cyber Security organization, which ensures all IRS systems receive, process, and store tax return data in a secure manner. MeF security involves an OMB mandated Certification and Accreditation (C&A) Process, requiring a formal review and testing of security safeguards to determine whether the system is adequately secured.

Business Building Blocks

Properly extending e-File web services to partners isn't just a matter of technology. There are numerous building blocks required that are more business than technical, ensuring a healthy ecosystem of web service partners. With a sensible strategy, web services can be translated from tech to business, allowing partners to turn IRS MeF into e-filing products that deliver required services to consumers.

Four Separate e-Filing Options

MeF provided the IRS with a way to share the burden of filing taxes with a wide variety of trusted partners, software developers, and corporations who have their own software systems. However, MeF is just one tool in a suite of e-File options, which also includes Free File software that any individual can use to submit their own taxes, as well as free fillable digital forms for individuals who do not wish to employ a software solution.

Even with these simple options, the greatest opportunities for individuals and companies are to use commercial tax software that walks one through what can be a complex process, or to depend on a paid tax preparer who employs their own commercial version of tax software. The programmatic web service version of e-File is just one option, but it is the heart of an entire toolkit of software that anyone can put to use.

Delivering Beyond Technology

The latest evolution of the e-File platform has technology at its heart, but it delivers much more than just the transmission of digital forms from 3rd party providers, in ways that also make good business sense:

  • Faster Filing Acknowledgements - Transmissions are processed upon receipt and acknowledgements are returned in near real-time, unlike the once or twice daily system processing cycles in earlier versions
  • Integrated Payment Option - Taxpayers can e-file a balance due return and, at the same time, authorize an electronic funds withdrawal from their bank accounts, with payments being subject to limitations of the Federal Tax Deposit rules
  • Brand Trust - Allowing MeF to evolve beyond just the IRS brand, letting new trusted commercial brands like TurboTax and TaxAct step up and deliver value to consumers

Without improved filing results for providers and customers, easier payment options and an overall set of expectations and trust, MeF would not reach the levels of e-Filing rates mandated by Congress. Technology might be the underpinning of e-File, but improved service delivery is the thing that will seal the deal with both providers and consumers.

Multiple Options for Provider Involvement

Much like the multiple options available to tax filers, the IRS has established tiers of involvement for partners in the e-File ecosystem. Depending on their model and capabilities, e-File providers can participate in multiple ways:

  • Electronic Return Originators (EROs) - EROs prepare returns for clients, or collect returns from taxpayers who have prepared their own, then begin the electronic transmission of returns to the IRS
  • Intermediate Service Providers - These providers process tax return data that originates from an ERO or an individual taxpayer, and forward it to a transmitter
  • Transmitters - Transmitters are authorized to send tax return data directly to the IRS, from custom software that connects directly with IRS computers
  • Online Providers - Online providers are a type of transmitter that sends returns filed from home by taxpayers using tax preparation software to file common forms
  • Software Developers - Software developers write the e-file software programs that follow IRS specifications for e-file
  • Reporting Agents - An accounting service, franchiser, bank, or other entity that is authorized to e-file Form 940/941 for a taxpayer

The IRS identified the multiple ways it needed help from an existing, evolving base of companies and organizations, and designed its partner framework to best serve its mission while delivering the best value to consumers--recognizing the incentives needed to solicit participation from the private sector and ensure efforts are commercially viable.

Software Approval Process

The IRS requires all tax preparation software used for preparing electronic returns to pass the requirements for Modernized e-File Assurance Testing (ATS). As part of the process, software vendors notify the IRS via the e-help Desk that they plan to commence testing, then provide a list of all forms they plan to include in their tax preparation software--vendors are not required to support all forms. MeF integrators are allowed to develop their tax preparation software based on the needs of their clients, using pre-defined test scenarios to create test returns formatted in the specified XML format. Software integrators then transmit the XML formatted test tax returns to the IRS, where an e-help Desk assister checks data entry fields on the submitted return. When the IRS determines the software correctly performs all required functions, the software is approved for electronic filing. Only then are software vendors allowed to publicly market their tax preparation software as approved for electronic filing--whether for use by corporations, tax professionals, or individual users.

State Participation

Another significant part of the MeF partnership equation is providing seamless interaction with the electronic filing of both federal and state income tax returns at the same time. MeF provides the ability for partners to submit both federal and state tax returns in the same "taxpayer envelope", allowing the IRS to function as an "electronic post office" for participating state revenue services -- certainly better meeting the demands of the taxpaying citizen. The IRS model provides an important aspect of a public / private sector partnership with the inclusion of state participation. Without state level participation, any federal platform will be limited in adoption and severely fragmented in integration.

Resources

Nurturing an ecosystem of partners takes a wealth of resources. Providing technical how-to guides, templates, and other resources for MeF providers is essential to the success of the platform. Without proper support, MeF developers and companies are unable to keep up with the complexities and changes of the system. The IRS has provided the resources needed for each step of the e-Filing process, from on-boarding to understanding the addition of the latest forms and changes to the tax code.

Market Research Data

Transparency of the MeF platform goes beyond individual platform operations, and the IRS acknowledges this important aspect of building an ecosystem of web service partners. The IRS provides valuable e-File market research data to partners by making available e-file demographic data and related research and surveys. This important data provides valuable insight for MeF partners to use in their own decision making process, but also provides the necessary information partners need to educate their own consumers as well as the general public about the value the e-File process delivers. Market research is not just something the IRS needs for its own purposes; this research needs to be disseminated and shared downstream providing the right amount of transparency that will ensure healthy ecosystem operations.

Political Building Blocks

Beyond the technology and business of the MeF web services platform, there are plenty of political activities that make sure everything operates as intended. The politics of web service operations can be as simple as communicating properly with partners and providing transparency, or as involved as security, proper governance of web services, and enforcement of federal laws.

Status

The submission of over 230 million tax filings annually requires a significant amount of architecture and connectivity. The IRS provides real-time status of the MeF platform for the public and partners, as they work to support their own clients. Real-time status updates of system availability keep partners and providers in tune with the overall system, allowing them to adjust their own operations to the reality of supporting such a large platform. Status availability is an essential aspect of MeF operations and overall partner ecosystem harmony.

Updates

An extension of MeF platform status is the ability to keep MeF integrators up-to-date on everything related to ongoing operations. This includes alerts that tune platform partners in to specific changes in tax law, resource additions, or other relevant operational news. The IRS also provides updates via an e-newsletter, giving the MeF platform a more asynchronous way to keep partners informed about ongoing operations.

Updates over the optimal partner channels are an essential addition to real-time status and other resources that are available to platform partners.

Roadmap

In addition to resources, status, and regular updates on the overall MeF system, the IRS provides insight into where the platform is going next, keeping providers apprised of what is coming for the e-File program. Establishing and maintaining the trust of MeF partners in the private sector is constant work, and requires a certain amount of transparency -- allowing partners to anticipate what is next and make adjustments on their end of operations. Without insight into the near and long term future, trust with partners will erode, and overall belief in the MeF system will be disrupted, unraveling over 30 years of hard work.

Governance

The Modernized e-File (MeF) programs go through several stages of review and testing before they are used to process live returns. When new requirements and functionality are added to the system, testing is performed by IRS's software developers and by IRS's independent testing organization. These important activities ensure that the electronic return data can be received and accurately processed by MeF systems. Every time an IRS tax form is changed and affects the XML schema, the entire development and testing processes are repeated to ensure quality and proper governance.

Security

Secure transmissions by 3rd parties to the MeF platform are handled by the Internet Filing Application (IFA) and Application to Application (A2A) channels, which are part of the IRS Modernized System Infrastructure, providing access to trusted partners through the Registered User Portal (RUP). Transmitters using IFA are required to use their designated e-Services user name and password to log into the RUP, and each transmitter also establishes an Electronic Transmitter Identification Number (ETIN) prior to transmitting returns. Once the transmitter successfully logs into the RUP, a Secure Socket Layer (SSL) Handshake Protocol allows the RUP and transmitter to authenticate each other and negotiate an encryption algorithm, including cryptographic keys, before any return data is transmitted. The transmitter and the RUP negotiate a secret encryption key for encrypted communication between the transmitter and the MeF system. For A2A transmitters, MeF accommodates only one type of user credential for authentication and validation: a username and an X.509 digital security certificate. Users must have a valid X.509 digital security certificate obtained from an IRS authorized Certificate Authority (CA), such as VeriSign or IdenTrust, and have their certificates stored in the IRS directory using an Automated Enrollment process.
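
As a rough sketch of what certificate-based client authentication looks like in practice (the endpoint and file paths below are hypothetical placeholders, not actual MeF details):

```python
# A minimal sketch of presenting an X.509 client certificate over TLS,
# analogous to how an A2A transmitter authenticates. The endpoint and
# file paths are hypothetical placeholders.
import requests

response = requests.post(
    "https://a2a.example.gov/mef/transmit",                # hypothetical
    cert=("transmitter_cert.pem", "transmitter_key.pem"),  # client cert + key
    data=b"<TransmissionEnvelope/>",                       # placeholder payload
    headers={"Content-Type": "text/xml"},
)
print(response.status_code)
```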

The entire platform is accredited by the Executive Level Business Owner, who is responsible for the operation of the MeF system, with guidance provided by the National Institute of Standards and Technology (NIST). The IRS MITS Cyber Security organization and the business system owner are jointly responsible for and actively involved in completing the IRS C&A Process for MeF, ensuring complete security of all transmissions with MeF over the public Internet.

A Blueprint For Public & Private Sector Partnerships In A 21st Century Digital Economy

The IRS MeF platform provides a technological blueprint that other federal agencies can look to when exposing valuable data and resources to other agencies as well as the private sector. Web services, XML, and proper authentication can open up access and interactions between trusted partners and the public in ways that were never possible prior to the Internet age.

While this web services approach is unique within the federal government, it is a common way to conduct business operations in the private sector -- something widely known as Service Oriented Architecture (SOA), an approach that is central to a healthy enterprise architecture. A service oriented approach allows organizations to decouple resources and data, opening up wide or granular levels of access to trusted partners. The SOA approach makes it possible to submit forms, data, and other digital assets to government, using XML as a way to communicate and validate information in a way that supports proper business rules, wider governance, and federal law.

SOA provides three essential ingredients for public and private sector partnership:

  • Technology - Secure usage of modern approaches to using compute, storage and Internet networking technology in a distributed manner
  • Business - Adherence to government lines of business, while also acknowledging the business needs and interest of 3rd party private sector partners
  • Politics - A flexible understanding and execution of activities involved in establishing a distributed ecosystem of partners, and maintaining an overall healthy balance of operation

The IRS MeF platform employs this balance at a scale that is currently unmatched in the federal government. MeF provides a working blueprint that can be applied across the federal government, in areas ranging from the veterans claims process to the financial regulatory process.

The United States federal government faces numerous budgetary challenges and must find new ways to share the load with other federal and state agencies as well as the private sector. An SOA approach like MeF allows the federal government to better interact with existing contractors, as well as future contractors, in a way that provides better governance, while also allowing for partnership with the private sector in ways that go beyond simply contracting. The IRS MeF platform encourages federal investment in a self-service platform that enables trusted and proven private sector partners to access IRS resources in predefined ways -- all of which support the IRS mission, while providing enough incentive that 3rd party companies will invest their own money and time into building software solutions that can be fairly sold to US citizens.

When an agency builds an SOA platform, it is planting the seeds for a new type of public / private partnership whereby government and companies can work together to deliver software solutions that meet a federal agency's mission and the market needs of companies. This also delivers value and critical services to US citizens, all the while reducing the size of government operations, increasing efficiencies, and saving the government and taxpayers money.

The IRS MeF platform represents 27 years of laying a digital foundation, building the trust of companies and individual citizens, and properly adjusting the agency's strategy to work with private sector partners. It has done so by employing best-of-breed enterprise practices from the private sector. MeF is a blueprint that cannot be ignored and deserves more study, modeling, and evangelism across the federal government. This could greatly help other agencies understand how they too can employ an SOA strategy, one that will help them better serve their constituents.

You Can View, Edit, Contribute Feedback To This Research On Github


API Transparency Report as Essential Building Block

After reading about Google’s release of their transparency report last month, I decided I would be adding API transparency reports to my list of essential building blocks for API owners. Since I wrote that, the EFF has published a great post also stating that it’s time for transparency reports to become the new normal, outlining the illness in our current approach:

When you use the Internet, you entrust your thoughts, experiences, photos, and location data to intermediaries — companies like AT&T, Google, and Facebook. But when the government requests that data, users are usually left in the dark.

In the United States, companies are not required by law to alert their users when they receive a government request for their data, and in some circumstances, they are explicitly prohibited from doing so. So it is up to us, the users, to join organizations like the EFF and help all online service providers understand the importance of transparency reports becoming default operating procedure.

Currently, we are only seeing transparency reports from providers like Google, DropBox, LinkedIn, Twitter, and a handful of other service providers. While I feel that all companies that run on users’ data should provide transparency reports, I’m going to focus on making sure it is standard operating procedure for companies that deliver APIs.

As a starting point for my API transparency report building block, I’m using the letter that was drafted by concerned privacy advocates who are calling for Microsoft to issue a transparency report on Skype, which it purchased in 2011 for $8.5 billion.

This letter states that:

Skype is a voice, video and chat communications platform with over 600 million users worldwide, effectively making it one of the world’s largest telecommunications companies. Many of its users rely on Skype for secure communications—whether they are activists operating in countries governed by authoritarian regimes, journalists communicating with sensitive sources, or users who wish to talk privately in confidence with business associates, family, or friends.

The Skype effort provides a beginning list of items all transparency reports should provide:

  1. Quantitative data regarding the release of user information to third parties, disaggregated by the country of origin of the request, including the number of requests made by governments, the type of data requested, the proportion of requests complied with, and the basis for rejecting the requests that were not complied with.
  2. Specific details of all user data the company currently collects, and its retention policies.
  3. The company’s best understanding of what user data third parties, including network providers or potential malicious attackers, may be able to intercept or retain.
  4. Documentation regarding the current operational relationship between the company and other third-party licensed users of its technology, including the company’s understanding of the surveillance and censorship capabilities users may be subject to as a result of using these alternatives.
  5. The company’s interpretation of its responsibilities under the Communications Assistance for Law Enforcement Act (CALEA), its policies related to the disclosure of call metadata in response to subpoenas and National Security Letters (NSLs), and more generally, the policies and guidelines its employees follow when the company receives and responds to requests for user data from law enforcement and intelligence agencies in the United States and elsewhere.

If your company is going to provide a service that users depend on, one that retains their personal data in any way, you should have a transparency report that educates users about what information any government is accessing about them, and how it is being used.

This applies to any data collected or transmitted via APIs, whether directly from a platform or through 3rd party platforms and services. API providers owe it to end users to provide insight into how their data is being used by 3rd party application developers via oAuth; the same insight should be provided regarding how government organizations are accessing and using that data as well.


Making Transparency Reports Standard Operating Procedure

Google released an update to the Transparency Report today, showing information regarding government requests for users' data on the Google Network.

Google's transparency report provides details on government requests, including who they come from, where the requests originate, the types of requests, and other critical information about how our government monitors US citizens and ultimately the entire world online.

There are stories on other blogs stating that Facebook and other platforms should follow Google's lead with transparency reports of their own. I agree. We will never see our government mandate that industry leaders do this, so we the people have to lobby corporations to follow Google's lead.

I see this type of transparency as critical for any company claiming they have an "open platform" or "open APIs". I will be adding the transparency report to my list of legal building blocks for all API owners.

As more and more of our world, economy, and daily lives move online and are transmitted via APIs, investigations by our legal system and government are only going to grow. We need to establish a way to report on these inquiries in real-time and in an open way, like Google has been doing for two years now.


Benefits of Transparency

One of the aspects of my current job that I enjoy is the transparent nature of my role. My job is to generate attention for Mimeo and the Mimeo Connect Cloud Print API.

Let me give you an example. In various roles I've had at companies--as lead developer, director of technology, or just a developer--I spent a lot of time researching new technologies.

When I research new technologies I usually create some sort of overview document, a presentation, and probably some sample code or a prototype. In legacy positions I did all this hard work, usually submitted it to my boss, and maybe gave a presentation to other teams. Often, my work went nowhere.

In my new role as an API Evangelist I enjoy a lot of transparency. I get to talk about my work as I'm learning. For example, the blog posts I wrote yesterday grew directly out of my project work, which involves a lot of research and testing, and I get to share this information in real-time. If the projects move forward I will blog about improvements as they happen. So my documentation and presentations often go public as I create them.

If a project doesn't go anywhere, the information is still out there--still on my blog and on my Twitter account. This can bring added SEO benefit to all my projects and help me in the future; you never know when it will come back around.

In other roles where I don't enjoy the same transparency this information would never see the light of day.

Open Data and Transparency for the Government

I am excited about the recent launch of Data.gov. I am very passionate about what APIs and open data access can do for companies, and I believe the same is true for government.

Now that the site is launched, we need some data. It doesn't look like much is available quite yet--I queried all categories and all agencies and got eight results. It's a start.

I will be watching on a regular basis to see what data sets show up.

Hopefully many more agencies see the value in posting their data online and involving the public at large.

Recommendations for Government Transparency

I used up all my mojo on my last post, and really can't write much on this topic right now, but I have to at least share this post I found tonight:

Recovery Recommendations From I School Faculty Sent to the US Government

It is the best recommendation I have read regarding how to change the Government and spend some recovery / stimulus money.

I will write more about it later, lots of thoughts swimming around.

If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.