Blog

  • August 20, 2018 9:00 AM | Data Coalition Team (Administrator)

    Guest blog by Robin Doyle, Managing Director, Office of Regulatory Affairs, J.P. Morgan Chase & Co.

    In May 2018, J.P. Morgan Chase published an article on the topic of data standardization, "Data Standardization - A Call to Action." The article called on the financial services industry, global regulators, and other stakeholders to address current deficiencies in financial data and reporting standards in order to make financial data more usable for informational needs and risk management, as well as for innovative technologies like artificial intelligence and machine learning.

    The article called for both global and national regulators to review the state of data and reporting standardization, and to take action to make improvements within these areas. Within the United States, the need for such a review is urgent. The U.S. regulatory reporting framework is fragmented, and the lack of coordination across agencies results in reporting requirements that are often duplicative and overlapping. Each agency is focused on collecting data in its own way, with its own definitions. This leads to higher costs for financial institutions to manage compliance and poorer quality and comparability of data for both regulators and firms.

    Specifically, whenever new data or a report is requested with slight differences in definitions or granularity, it triggers a new reporting process, including reconciliations to other reports and U.S. GAAP numbers, as well as obtaining corresponding sign-offs and attestations. The lack of common data standards and reporting formats across the agencies makes reporting complex and incomparable. Separate supervisory and examination processes occur as a result of the multi-agency, multi-reporting framework. Here are two areas that highlight the issues at hand:

    1. Top Counterparty Reporting: There are multiple reports that collect information about a firm’s top counterparties, including the Office of the Comptroller of the Currency (OCC) Legal Lending Limit Report, the Financial Stability Board (FSB) common data collection of Institution-to-Institution Credit Exposure Data (e.g., Top 50 Counterparty Report), the Federal Financial Institutions Examination Council (FFIEC) 031 (i.e., bank “Call Report”), the new Federal Reserve Single Counterparty Credit Limit Top 50 Report, and others. Each of these reports has a slightly different scope and uses different definitions for aggregation of exposures, resulting in significant work to produce the overlapping reports and explain the differences in the reported results.1

    2. Financial Institutions Classification: There are numerous reporting requirements – regulatory capital deductions (e.g., FFIEC 101 Schedule A & FR Y-9C, Schedule HC-R), risk-weighted assets (e.g., Asset Value Correlation calculation FFIEC 101 Schedule B), systemic risk (e.g., Federal Reserve’s Banking Organization Systemic Risk Report—FR Y-15 Schedule B), and liquidity (e.g., Complex Institution Liquidity Monitoring Report FR 2052a), among others – that aggregate and report data with the classification “Financial Institution,” each using a different definition of “Financial Institution.” While on the surface this may not seem complicated, the reality is that firms have full teams of people who parse data across these different definitions to ensure reporting is done correctly and can be reconciled. In a large firm, efforts to create tagging systems to automate the parsing process can take years and require additional headcount to implement.2

    The U.S. regulatory community is aware of this – in the commentary from the recent FR Y-15 information collection rule, the Federal Reserve acknowledges the conflict but does not address the burden:

    One commenter noted that the definition of ‘financial institution’ in the FR Y-15 is different from other regulatory reports and recommended aligning the varying definitions. In response, the Board acknowledges that its regulations and reporting sometimes use differing definitions for similar concepts and that this may require firms to track differences among the definitions. Firms should review the definition of ‘financial institution’ in the instructions of the form on which they are reporting and should not look to similar definitions in other forms as dispositive for appropriate reporting on the FR Y-15.

    These issues could be addressed through the use of common data and reporting standards across the agencies. The Financial Stability Oversight Council (FSOC) could take steps within its mandate to facilitate coordination among its member agencies towards the standardization of regulatory reporting requirements across the agencies.3

    The FSOC could initiate a review of the current state of data and reporting within the U.S. to identify overlapping and duplicative reporting requirements and opportunities to move from proprietary data standards to national and global standards. Based on the review, a roadmap could be established to address the issues and gaps identified. Innovative approaches to data collection could be established, such as single collections that are then shared among agencies, and global reference data should be used in all cases where it exists. Further, mechanisms could be created to ensure better coordination among agencies in the process of rulemaking, to avoid duplication and to leverage consistent, established data standards.

    The benefits of such improvements would be substantial. Better standardization of regulatory reporting requirements across the agencies would significantly improve the ability of the U.S. public sector to understand and identify the buildup of risk across financial products, institutions, and processes.

    Reducing duplication, streamlining reporting, and using data standards would create efficiencies, saving the time and cost that firms and regulators otherwise expend manually collecting, reconciling, and consolidating data. According to the U.S. Department of the Treasury’s Office of Financial Research (OFR), the estimated cost to the global industry from the lack of data uniformity and common standards runs into the billions of dollars.4

    Looking forward, having good quality, standardized data is an important stepping stone to reaping the benefits of the ongoing digitization of financial assets, digitization of markets and growing use of new, cutting-edge technologies, such as artificial intelligence. Many areas of the financial industry will be impacted, in some capacity, by these innovations in the coming years. These areas may include customer service, investment advice, contracts, compliance, anti-money laundering and fraud detection.

    We urge the U.S. regulatory community to heed this call to action.


    1. Links to report references: https://occ.gov/topics/credit/commercial-credit/lending-limits.html; https://www.fsb.org/policy_area/data-gaps/page/3/; https://www.fsb.org/wp-content/uploads/r_140506.pdf; https://www.newyorkfed.org/banking/reportingforms/FFIEC_031.html; https://www.federalreserve.gov/reportforms/formsreview/FR2590_20180620_f_draft.pdf
    2. Links to referenced reports: https://www.ffiec.gov/forms101.htm; https://www.federalreserve.gov/reportforms/forms/FR_Y-1520170331_i.pdf; https://www.federalreserve.gov/reportforms/forms/FR_2052a20161231_f.pdf 
    3. Dodd-Frank Wall Street Reform and Consumer Protection Act, Sec. 112 (a)(2)(E)
    4. Office of Financial Research: Breaking Through Barriers Impeding Financial Data Standards (February, 2017).


  • August 07, 2018 9:00 AM | Data Coalition Team (Administrator)

    Last week, the Data Coalition responded to the newly released Federal Data Strategy which we summarized in a blog two weeks ago.

    The Federal Data Strategy is an outgrowth of the President’s Management Agenda, specifically Cross Agency Priority Goal #2 – Leveraging Data as a Strategic Asset, which is co-led by the Office of Management and Budget (OMB), the Office of Science and Technology Policy (OSTP), the Department of Commerce (DOC), and the Small Business Administration (SBA). Administration officials within these agencies called for public feedback and are currently working through the responses. We expect to see more detailed plans between October and January 2019 (see page 11 of the recent action plan).

    Our response provided high-level commentary on the draft principles as well as six proposed use cases that the Administration could potentially work into the prospective Data Incubator Project.


    Commentary on Draft Principles:

    The Federal Data Strategy proposes 10 principles spread across three categories: Stewardship, Quality, and Continuous Improvement.

    Overall, we emphasized the benefits of assuring federal data assets are in open and machine-readable formats that impose uniform and semantic structure on data, thus mitigating organizational uncertainties and expediting user development. We also discussed the importance of pursuing data standardization projects that identify common data elements across organizations and enforce standards.

    For Stewardship, it is important to ensure that data owners and end users are connected so that data is presented in useful ways and data quality can be continuously improved.

    With regards to Quality, it is important to establish policies to assure core ‘operational’ and ‘programmatic’ data assets are accurate, consistent, and controlled. We note simply that it is at the point of ingestion that any data standards or quality thresholds should be enforced. As a starting place for data strategy principles, we recommend incorporating the open data principles identified by the CIO Council’s Project Open Data.

    And finally, for Continuous Improvement, we recommend that data should be made available in open formats for bulk download by default. This allows for maximum stakeholder engagement from the beginning.


    Six Proposed Open Data Use Cases:

    We also propose the following six use cases for the Administration to work on:

    Use Case 1: Fix Public Company Filings (Access, use, and augmentation)

    The Securities and Exchange Commission (SEC) requires public companies to file financial statements in standardized XBRL format, but the standard has complications. Currently, the format allows for too much custom tagging, inhibiting the goals of comparability and transparency. The Administration should work with the SEC and the Financial Accounting Standards Board (FASB) to ensure that the U.S. Generally Accepted Accounting Principles (US GAAP) taxonomy enforces FASB rules as the true reference for all elements in the taxonomy, thus eliminating unnecessary tags, reducing overall complexity, and minimizing the creation of extension data elements. This will ultimately improve comparability and data quality.
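
    To make the custom-tagging concern concrete, here is a minimal sketch (in Python) of how an analyst might tally standard versus company-extension facts in an XBRL instance document. The file name and the namespace heuristic are illustrative assumptions, not an SEC or FASB method.

```python
# Minimal sketch: tally standard vs. company-extension facts in an XBRL instance.
# The file name and the namespace heuristic are illustrative assumptions.
from collections import Counter
import xml.etree.ElementTree as ET

STANDARD_NS_HINTS = ("fasb.org/us-gaap", "xbrl.sec.gov/dei")  # standard taxonomies

def tally_tags(instance_path: str) -> Counter:
    """Count facts by whether their namespace looks like a standard taxonomy."""
    counts = Counter()
    for _, elem in ET.iterparse(instance_path):
        if not elem.tag.startswith("{"):        # skip unqualified elements
            continue
        namespace = elem.tag[1:].split("}")[0]  # "{ns}local" -> "ns"
        if any(hint in namespace for hint in STANDARD_NS_HINTS):
            counts["standard"] += 1
        elif "xbrl.org" not in namespace:       # ignore structural XBRL elements
            counts["extension"] += 1
    return counts

if __name__ == "__main__":
    print(tally_tags("example-10k-instance.xml"))  # hypothetical file name
```

    The higher the share of extension facts, the harder it is to compare one company's filing with another's.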

    Use Case 2: Documents to Data in Management Memorandum (Decision-making and Accountability)

    Congress has already taken on the challenge of adopting a data standard for laws and mandates via the United States Legislative Markup (USLM), which provides a framework for how the Administration can transform federal documents into open data. The Administration should publish federal management guidance in integrated, machine-readable data formats instead of documents. This will allow agencies to better understand how policies interact with one another and thus comply more readily, and allow the public and Congress to better understand the specific factors guiding and constraining agency programs and leadership.

    Use Case 3: Entity Identification Working Group (Enterprise Data Governance)

    Currently, the federal government uses a variety of different codes to identify companies, nonprofits, and other non-federal entities, which makes matching data sets across federal agencies a time-consuming and expensive undertaking. Adoption of the Legal Entity Identifier (LEI) as the default identification code for legal entities will enable agencies to aggregate, compare, and match data sets critical to their regulatory and programmatic missions.
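
    As a small illustration of why a single, non-proprietary identifier helps automation, here is a minimal sketch of LEI check-digit handling: the LEI standard (ISO 17442) uses the ISO 7064 MOD 97-10 scheme, and the sample prefix below is hypothetical, not a real registered entity.

```python
# Minimal sketch: generate and validate LEI check digits (ISO 17442 uses the
# ISO 7064 MOD 97-10 scheme, with letters mapped A=10 ... Z=35).
def lei_check_digits(prefix18: str) -> str:
    """Compute the two check digits for an 18-character LEI prefix."""
    digits = "".join(str(int(ch, 36)) for ch in prefix18.upper() + "00")
    return f"{98 - int(digits) % 97:02d}"

def lei_is_valid(lei: str) -> bool:
    """Return True when a 20-character LEI passes the MOD 97-10 check."""
    lei = lei.strip().upper()
    if len(lei) != 20 or not lei.isalnum():
        return False
    digits = "".join(str(int(ch, 36)) for ch in lei)
    return int(digits) % 97 == 1

if __name__ == "__main__":
    prefix = "549300EXAMPLE00001"          # hypothetical 18-character prefix
    lei = prefix + lei_check_digits(prefix)
    print(lei, lei_is_valid(lei))          # validates by construction
```

    Because every agency can resolve the same identifier against the public GLEIF registry, matching entities across data sets stops being a bespoke reconciliation exercise.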

    Use Case 4: Mission Support or Operational Data Standards Coordination (Decision-Making and Accountability)

    Treasury and the Office of Management and Budget (OMB) have spent over four years working to establish and integrate the DATA Act Information Model Schema (DAIMS), which links budget, accounting, procurement, and financial assistance datasets – operational data – that were previously segmented across federal agencies. The Administration should utilize the DAIMS for modernizing the annual budget process, agency financial reporting, and agency performance reporting, thus allowing for easy use of data to compare, justify, and plan budget goals and agency spending.
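
    As a rough illustration of what that linkage buys, the sketch below joins two previously siloed extracts on a shared award identifier and rolls up obligations by vendor. The file and column names are hypothetical placeholders, not actual DAIMS element names.

```python
# Minimal sketch: the kind of join a shared schema such as the DAIMS makes routine,
# linking award-level spending with procurement detail on a common award identifier.
# File and column names are hypothetical placeholders, not actual DAIMS elements.
import pandas as pd

awards = pd.read_csv("award_financial.csv")            # e.g. award_id, obligation_amount
procurements = pd.read_csv("procurement_detail.csv")   # e.g. award_id, vendor_name, naics_code

linked = awards.merge(procurements, on="award_id", how="inner")
top_vendors = (linked.groupby("vendor_name")["obligation_amount"]
               .sum()
               .sort_values(ascending=False)
               .head(10))
print(top_vendors)
```

    Without a shared identifier and schema, each join like this is a one-off mapping exercise rather than a routine query.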

    Use Case 5: Mission or Programmatic Data Standards Coordination (Enterprise Data Governance; Decision-Making and Accountability; Access, Use, and Augmentation)

    To build a common approach to multi-agency programmatic data sharing, the Departments of Homeland Security and Health and Human Services created the National Information Exchange Model (NIEM), which maintains a data dictionary of common fields allowing agencies to create formats using those fields. The Administration should consider endorsing NIEM as the government-wide default for programmatic data standardization and publication projects. This will afford agencies the easier path of reusing common data fields of the NIEM Core, rather than building their own data exchanges and reconciliation processes.

    Use Case 6: Establish A Standard Business Reporting Task Force to Standardize Regulatory Compliance (Enterprise Data Governance; Access, Use, and Augmentation)

    Standard Business Reporting (SBR), which has been fully implemented in Australia, demonstrates that regulatory agencies can reduce the compliance burden on the private sector by replacing duplicative forms with standardized data, governed by common data standards across multiple regimes. The Administration should convene a task force representing all major U.S. regulatory agencies to create a roadmap for standardizing the data fields and formats that they use to collect information from the private sector. While the full implementation of a U.S. SBR program would require a multi-year effort, the creation of an exploratory task force would put the policy barriers and necessary investments into scope.

    Other Organizations’ Feedback Echoes an Open Data Approach

    While the responses have not yet been made public in a central portal, we have gathered a few of the key submissions.

    The Bipartisan Policy Center (BPC) has issued two separate comment letters. The first letter, on behalf of the former leadership of the Commission on Evidence-Based Policymaking, summarizes the Commission’s recommendations. Their second letter summarizes recommendations made by the BPC-coordinated Federal Data Working Group, which the Data Coalition works with. Here we have joined calls to clarify the guidance from the 2013 open data Executive Order (M-13-13) (e.g., define “data asset” and renew the focus on data inventories), leverage NIEM to develop data standards, look into harmonizing entity identifiers across agencies, explore preemptive implementation of the Foundations for Evidence-Based Policymaking Act (H.R. 4174), which includes the OPEN Government Data Act, and define terminology for types of public sector data (i.e., similar to our comment’s demarcation between operational and programmatic data).

    The Center for Data Innovation (CDI) think tank also provided feedback that calls for the administration to support the passage of the OPEN Government Data Act as “the single most effective step” the administration could take to achieve the goals of the Federal Data Strategy. Additionally, CDI calls for improvements to data.gov’s metadata, for OMB to establish an “Open Data Review Board” for incorporating public input in prioritizing open data projects, and for the Administration to establish “data trusts” to facilitate sharing of non-public data. Lastly, they argue that the Internet of Things (IoT) revolution and artificial intelligence (AI) should be part of the conversation.

    The data standards organization XBRL-US recommends that the Administration “require a single data standard for all financial data reporting…to establish a single data collection process,” adopt the Legal Entity Identifier for all entities reporting to the federal government, and use automated validation rules to ensure data quality at the point of submission.

    The new State CDO Network sent a letter emphasizing the important role of State and local governments. They wrote, “[States are] in the unique position of creating and stewarding data based on federal requirements,” while calling for a formal plan to leverage administrative data to address fraud, waste, and abuse.

    The Preservation of Electronic Government Information (PEGI) Project calls for an advisory board to make recommendations on data management and stewardship while echoing our call to utilize the open government data principles and also incorporate the FAIR lifecycle data management principles. PEGI also calls for scalable and automated processes for maximizing the release of non-sensitive data on data.gov.

    Lastly, the American Medical Informatics Association (AMIA) identifies the publication and the harmonization of data dictionaries across agencies as two fundamental activities. They also call for collecting and creating information in ways that support “downstream information processing and dissemination,” for establishing a framework to help agencies implement a “portfolio approach” to data asset management, and for the Administration to extend the concept of “data as an asset” to information produced by federal grant recipients and contractors.

    The Data Coalition will be working with these groups and others to align the Administration’s efforts to establish a pragmatic, sustainable, and effective Federal Data Strategy.


  • March 30, 2018 9:00 AM | Data Coalition Team (Administrator)

    The inaugural RegTech Data Summit’s thesis was that regulatory rules, technology, and data must be modernized in a coordinated fashion. If all three areas are modernized in tandem, new RegTech solutions will flourish, reducing reporting duplication, minimizing reporting errors, and enabling automation.

    When Regulation, Technology, and Data intersect – change happens.

    Over 400 participants, 37 speakers, three live technology demos, and over 20 exhibitors agreed: we were right.

    Throughout the day, we looked at the state of regulatory reporting regimes, solutions that exist, and what could be improved by modernization – “What is?” and “What if?”

    Here are my top 10 moments:

    1. SEC: RegTech can “make everyone’s lives easier”
    2. XBRL has an image problem
    3. A vision for a universal, non-proprietary identifier
    4. Demo: Achieving an open law vision
    5. Demo: Blockchain for continuous audits
    6. Demo: SBR is happening down under; businesses and government are both saving
    7. Silicon Valley Keynote: Private-public collaboration is key
    8. A coming convergence of regulation, technology, and data
    9. Looking ahead: What does the future look like for RegTech?
    10. The numbers

    Let’s dive into each of these moments!

    1. SEC keynote: RegTech can “make everyone’s lives easier”

    From left to right: Yolanda Scott Weston, a principal at Booz Allen Hamilton and Michael Piwowar, Commissioner, SEC

    SEC Commissioner Michael Piwowar kicked off the RegTech Data Summit. The Commissioner outlined his definition of RegTech:

    “[C]overs the use of technology by regulators to fulfil their duties in a more thorough and efficient manner…. RegTech also refers to the use of technology by regulated entities to streamline their compliance efforts and reduce legal and regulatory costs. Most importantly, the term covers collaboration between private and public actors to take advantage of existing technologies to make everyone’s lives easier.”

    • Watch SEC Commissioner Piwowar’s keynote address.

    2. XBRL has an image problem

    From left to right: Leslie Seidman, Former Chair, Financial Accounting Standards Board and Giancarlo Pellizzari, Head, Banking Supervision Data Division, European Central Bank

    • What is? Currently, the SEC has a dual reporting regime–HTML filing and XBRL filing. That’s burdensome! For over three years, the Data Coalition has been advocating for the SEC to move away from this duplicative reporting system. Leslie Seidman, former FASB Chairman, noted that “XBRL is suffering from an image crisis… most companies view this as a burden that’s deriving them no benefit whatsoever.”
    • What if? Seidman went on to recommend how advocates of structured data should describe its benefits to corporate America,

    “[S]how [corporations] the extent of manual effort compared to an automated process using XBRL data–that alone, by combining the processes… you will clearly be saving money because you would have one group who is responsible for preparing and issuing those financial reports… Talk candidly and influentially about these potential risks and costs that exist now, and how the iXBRL solution will actually reduce their risk and cost.”

    • Our Coalition strongly supports the Financial Transparency Act (FTA) (H.R. 1530) (summary here), currently pending in the U.S. House, to direct the SEC to replace its duplicative documents-plus-data system with a single submission, both human- and machine-readable. The bill has 32 bipartisan cosponsors. When passed, it will be the nation’s first RegTech law.

    3. A vision for a universal, non-proprietary identifier

    • What is? Regulatory agencies use (approximately) 18 different identifiers to track the entities they regulate; companies maintain internal IDs; and proprietary ID systems are fatally flawed. There is no ID to rule them all. The hodgepodge of identifiers impedes technological solutions, frustrates compilers and enforcers, and wastes everyone’s time.

    4. Demo: A vision for machine-readable regulation

    • What is? Member company Xcential demoed how Congress is moving away from static documents to adopt open data standards. Xcential is helping the Clerk of the House and the Office of the Law Revision Counsel create and apply an open data format to legislative materials; the project is known as the U.S. House Modernization project. Xcential’s software can natively draft and amend bills using the XML-based U.S. Legislative Markup (USLM). Other projects Xcential is working on include the U.K. Legislative Drafting, Amending, & Publishing Programme, which is publishing rules, regulations, orders, directives, proclamations, schemes, and by-laws (bye-laws) in open data formats, fully machine-readable!

    • What if? If laws, regulations, and other legal materials were all published in open formats, machine-readable regulation would improve compliance, reduce human effort, and shrink compliance costs. Legislation to keep an eye on: The Searchable Legislation Act (SLA) (H.R. 5143); The Statutes at Large Modernization Act (SALMA) (H.R. 1729); The Establishing Digital Interactive Transparency (EDIT) Act (H.R. 842).

    5. Demo: Blockchain for continuous audit

     

     

    • What is? Audits are burdensome, costly, and time consuming, and provide only a once-a-year or once-a-quarter picture of an organization’s finances.
    • What if? Auditchain co-founder Jason Meyers and director of assurance and XBRL architecture Eric Cohen demoed a blockchain-based approach to audits that focuses on a continuous stream of an organization’s transactions, instead of annual or quarterly reports. The Summit audience saw a fully integrated continuous audit and reporting ecosystem for traditional and decentralized enterprises. The audit data combines a standardized transaction model on the back end and XBRL on the front end for dynamic, customizable reports for stakeholders.

     

     

    6. Demo: SBR is happening down under; businesses and government are both saving

    • What is:

    • What if? Matt Vickers of Xero outlined the benefits for the United States if Standard Business Reporting (SBR) were adopted: “the economy is 15 times larger, and the tax system and regulatory complexity is comparable.” The steps that need to be taken to ensure SBR is successfully implemented include “1. Developing a single taxonomy and 2. Engaging early with software companies.”
    • The Data Coalition continues to push for legislation to replace financial regulatory documents with open data and support a longer-term move towards SBR in the United States. Our sister organization, the Data Foundation, explained how this might work in a report last year, co-published with PwC: Standard Business Reporting: Open Data to Cut Compliance Costs.

    7. Silicon Valley Keynote: Private-public collaboration is key

    • Joe Lonsdale, co-founder of Palantir and OpenGov, delivered a compelling keynote address on how our government and Silicon Valley can partner to improve the way government collects and publishes regulatory information. Here’s a snippet:

    “It is possible for a team of entrepreneurs to very meaningfully impact government… I don’t think these things get fixed by insiders. It’s just not how the world ever works. It is always outsiders partnering with allies in the inside and figuring out how to adopt technology that’s going to upgrade all these processes.”

    • Joe announced the founding of a new startup, Esper, which will work with regulatory agencies to automate the rulemaking process. Watch his keynote address here!

    8. A coming convergence of regulation, technology, and data


    • What is? Francis Rose, host of Government Matters, moderated the Convergence Panel, which featured insights that panelists had learnt throughout the day, and brought the day’s theme together: regulation, technology, and data must be modernized in a coordinated fashion to enable RegTech solutions. Panelists agreed this is “no easy task.”
    • What if? Panelist Adam White of GMU said it best when he described what needs to happen for regulation, technology, and data to be modernized: “Agencies need to be brought together in a collaborative way … that would benefit immensely from standardized data and more transparency, allowing agencies to work on a common basis of facts across the board.”

    9. Looking ahead: What does the future look like for RegTech?

    • What is? More than 200 regulators on the local, state and federal levels have disparate systems. The regulators continue to collect document-based filings, rather than using existing RegTech solutions to collect the information as open, standardized data. And they continue to issue regulations as documents, rather than exploring machine-readable regulation.

    From left to right: Steven Balla, GMU, Jim Harper, Competitive Enterprise Institute (former), and Sarah Joy Hays, Data Coalition

    • What if? Steven Balla of GMU said THREE things need to happen to transform regulatory information collection and rulemaking from documents into data: 1. “Agency leaders need to position themselves and their organization as innovators. We can underestimate the importance of allies with agencies; 2. We need the relevant authorities in the Executive Branch to have the coordination function, specifically OMB’s Office of Information and Regulatory Affairs; 3. [and,] finally, leadership on Capitol Hill. There is nothing more effective than a law or budget to move organizational behavior.”
    • Jim Harper, formerly of the Competitive Enterprise Institute, got right to the point: “To get all agencies on the same data standards there is on one hand, shame, and then there is political imperative.”

    10. The numbers


  • March 23, 2018 9:00 AM | Data Coalition Team (Administrator)

    This week the White House endorsed a data-centric approach to modernizing and restoring trust in government. For data companies and data transparency, the newly-unveiled President’s Management Agenda (PMA) does not disappoint.

    Where did this agenda come from?

    A year ago the White House issued an Executive Order, the Comprehensive Plan for Reorganizing the Executive Branch, and a corresponding government-wide reform plan (see M-17-22). Our prior blog on the reform plan implored the Administration to make operational and material data a central focus in modernizing government management.

    With the release of the PMA, that is what the White House has done.

    The PMA’s Fourteen Goals: A Performance Agenda Grounded in Law

    As a whole, the PMA should be read in concert with the President’s Fiscal Year 2019 budget request and the corresponding agency 2018-2022 strategic plans. However, thanks to the Government Performance and Results Act (GPRA) Modernization Act of 2010 (P.L. 111-352), which established Performance.gov, you do not need to laboriously parse out the individual goals from these reams of disconnected documents.

    Instead, the PMA is broken down into fourteen discrete Cross-Agency Priority (CAP) Goals, representing the GPRA Modernization Act’s requirement for the executive branch to “identify major management challenges that are Governmentwide or crosscutting in nature and describe plans to address such challenges.”

    The unique quality of these CAP Goals is that they are “long[-]term in nature.” In GPRA, Congress designed the concept of “agency priority goals” to span Presidential transitions. In the law, “cross-agency goals” are on a four year lifecycle with a requirement that they be established a full year after a new President takes office (see Sec. 5). We saw the benefits of this structure throughout 2017, as the previous Administration’s “Open Data” CAP Goal empowered agency leaders to keep pursuing data reforms through the first year of the new Administration’s transition (see the 2014-2018 goals archived here).

    Each CAP Goal names specific leaders who will be accountable for pursuing it. This accountability helps motivate progress and insulate it from politics.

    Driving the PMA: “an integrated Data Strategy”

    With the PMA, the White House is putting data and data standards at the center of federal management. This matches our Coalition’s prior recommendations, and is good news for data companies and data transparency.

    The PMA identifies three overarching “drivers” of transformation: first, a focus on the government’s systems with IT Modernization (Goal 1: Modernize IT to Increase Productivity and Security); second, an integrated strategy around Data, Accountability, and Transparency (Goal 2: Leveraging Data as a Strategic Asset); and third, improved Workforce management (Goal 3: Developing a Workforce for the 21st Century).

    These three drivers, and the three CAP goals that correspond to them, intersect with the PMA’s eleven other CAP goals (see image).

    The White House’s decision to clearly separate IT systems from data (and data standards) is the right approach. The government’s data can be standardized and made more useful, and more transparent, without requiring major system changes.

    Therefore, the Data Coalition applauds the PMA’s central focus on the government’s need for “a robust, integrated approach to using data to deliver on mission, serve customers, and steward resources”–a focus that will now guide this Administration.

    Last July we made three recommendations for the PMA along these lines. We are pleased to see all three recommendations reflected in the final product.

    First, we recommended that “OMB should adopt the DATA Act Information Model Schema (DAIMS) as the primary government-wide operational data format to align various agency business functions.” That’s exactly what Goal 2 of the PMA now does.

    The “Access, Use, and Augmentation” strategy for Goal 2 “will build on work like the DATA Act Information Model Schema (DAIMS)” (see page 16 of the PMA) and “promote interoperability, data standardization, and use of consensus standards, specifications, metadata, and consistent formats” (page 8 of the action plan). This syncs with the Treasury Department’s recently-released Strategic Plan, which states that ”[the DAIMS] can be expanded to include other administrative data and link more domains across the federal enterprise…to support decision-making and provide metrics for evaluating program performance and outcomes” (see page 30). The budget request backs this up with potential increased funding for the Treasury’s Bureau of the Fiscal Service which would have resources for “continued operational support for execution of the [DATA Act]” (see pages 21-22).

    Second, we recommended that the Administration leverage the work of the National Information Exchange Model (NIEM) for data governance work and information exchange across the government. If you read the PMA’s Goal 2 together with the 2019 budget request, you will find this recommendation validated as well.

    The “Enterprise Data Governance” strategy for Goal 2 calls for “develop[ing] a coordinated approach to managing communities of stakeholders in the Federal community and among external constituents” and better coordination of “existing governance bodies” (see page 7 of the action plan). Additionally, the 2019 budget request’s analytical perspective on “Building and Using Evidence to Improve Government Effectiveness” calls for the “development of interoperable data systems, which can communicate and exchange data with one another while maintaining the appropriate privacy and security protections” as “critical to realiz[ing] the full potential of shared administrative data.” The budget request goes on to praise NIEM as a model for “data exchange at all levels of government across program areas…in partnership with private industry stakeholders and state/local partners” (see page 5 of the analytical perspective).

    Third, we supported the continued implementation of Technology Business Management (TBM), a private-sector framework that helps organizations standardize the data used to classify technology investments, and recommended alignment with the DATA Act’s DAIMS.

    In the PMA, TBM is listed alongside the DAIMS in Goal 2 (see page 16 of the PMA), and the DATA Act is named as a supporting program in Goal 10: Federal IT Spending Transparency (see page 10 of the action plan).

    The PMA’s Other Goals: Grants Accountability, Paperless Forms, and (Maybe) Guidance Documents Expressed as Data

    Across the PMA’s other CAP Goals, we see a consistent data-centric approach and continued alignment with the Data Coalition’s Policy Agenda.

    As we celebrated yesterday, Goal 8: Results-Oriented Accountability for Grants “recognizes that recipient burden (such as excessive compliance requirements) can be reduced if grant reporting data is standardized” (see page 5 of the action plan). This aligns with the objectives of the Grant Reporting Efficiency and Agreements Transparency (GREAT) Act (H.R. 4887), which we are advocating for and which is making fast progress in Congress (see more).

    Goal 4: Improving Customer Experience introduces a “Paperless Government Project,” led by the US Digital Service, which would help agencies reduce redundant and unnecessarily complex forms. The Data Coalition is pushing reforms across a number of fronts that would apply open data concepts to simplify complex regulatory reporting (for instance, Standard Business Reporting).

    And Goal 6: Shifting From Low-Value to High-Value Work seeks to establish “regular processes to assess the burden [of OMB’s management guidance] on agencies and to rescind or modify requirements over time” (see page 5 of the action plan). The way to create such processes is for OMB to publish its guidance in integrated, machine-readable data formats instead of documents. Our work to pursue “open data for laws and mandates” provides a use case for exactly the same transformation, starting with Congressional laws, bills, and amendments.

    Each of the CAP Goals identifies the senior executives who will be accountable for delivering these promised reforms. We commend the administration for explicitly recognizing both the executives accountable for these goals as well as the career staff who will be managing these efforts over the next four years.

    The Road Ahead

    As all this work takes shape, it will be important to remember the guiding statements which set the stage at the PMA’s launch event. Newly-appointed Office of Management and Budget Deputy Director for Management Margaret Weichert called data “a foundational asset to driving economic growth in innovation.” Incoming US Chief Information Officer Suzette Kent echoed with a call for a “strategic view of data as one of our mission critical assets.” It will be up to these new leaders to turn the PMA’s vision into a reality.

    The Data Coalition will continue to support data transparency and data standardization–which means we will work hard to hold the Administration accountable to these well-stated goals.


  • March 22, 2018 9:00 AM | Data Coalition Team (Administrator)

    The White House has published a plan to transform federal grant reporting from disconnected documents into open, standardized data.

    The Data Coalition views this as a big step forward! Supported by StreamLink Software and other leading data companies, we’ve been pushing for open data in grant reporting since 2013.

    Last Tuesday, as part of the long-awaited release of the President's Management Agenda, the White House announced fourteen new government-wide goals. Goal number 8 of these is “Results Oriented Accountability for Grants.”

    The White House recognizes that the government’s current system of grant reporting creates challenges for grantor agencies, grantees, and the populations they serve. Grantees must fill out complicated document-based forms to report on their receipt and use of grant funds.

    As a result, their managers report spending 40% of their time on compliance, according to a survey by REI Systems, the National Grants Management Association, and the George Washington University.

    Meanwhile, because these forms are submitted to over 2,200 separate program offices across the government, transparency is difficult. There is no easy way for agencies, beneficiaries, or the public to see a grantee’s performance across multiple programs.

    CURRENT STATE: According to the Data Foundation’s Transforming Grant Reporting, without a standardized data taxonomy, federal grant reporting is a mess!

    Last year, our sister organization, the Data Foundation, conducted an intensive program of research into the challenges of federal grant reporting, supported by StreamLink Software and Workiva. In December, the Foundation published its magnum opus: Transforming Grant Reporting, which recommended that the government should “replace document-based [grant reporting] forms with standardized, open data.”

    To accomplish that, we need a government-wide taxonomy, or data dictionary, which standardizes the data fields that all grantor agencies use to collect information from their grantees.
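
    To give a sense of what such a taxonomy could look like in practice, here is a minimal sketch of a few data-dictionary entries and a validator that checks a grantee report against them. The field names, rules, and sample report are hypothetical, not drawn from any actual federal taxonomy.

```python
# Minimal sketch: hypothetical data-dictionary entries and a validator that checks
# a grantee report against them. Field names and rules are illustrative only.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DataElement:
    name: str
    description: str
    check: Callable[[Any], bool]

TAXONOMY = [
    DataElement("award_id", "Unique federal award identifier",
                lambda v: isinstance(v, str) and len(v) > 0),
    DataElement("period_of_performance_end", "ISO 8601 date (YYYY-MM-DD)",
                lambda v: isinstance(v, str) and len(v) == 10 and v[4] == "-" and v[7] == "-"),
    DataElement("federal_share_expended", "Dollar amount, non-negative",
                lambda v: isinstance(v, (int, float)) and v >= 0),
]

def validate(report: dict) -> list[str]:
    """Return a list of problems; an empty list means the report passes."""
    problems = []
    for element in TAXONOMY:
        if element.name not in report:
            problems.append(f"missing field: {element.name}")
        elif not element.check(report[element.name]):
            problems.append(f"invalid value for {element.name}: {report[element.name]!r}")
    return problems

print(validate({"award_id": "ABC-2018-001", "federal_share_expended": 125000.0}))
# -> ['missing field: period_of_performance_end']
```

    The point is that once every grantor agency collects against the same dictionary, the same validation logic (and the same analytics) can work everywhere.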

    FUTURE: If the federal government adopts a common data taxonomy for all grant reporting, grantees will enjoy a reduced compliance burden and agencies and the public will get better transparency.

    Last month, Congress took note. Reps. Virginia Foxx (R-NC) and Jimmy Gomez (D-CA) introduced the GREAT Act (H.R. 4887), which will require the government to create the necessary taxonomy, and then require all the agencies to use electronic data, formatted consistently with that taxonomy, to collect information from their grantees. The House Oversight Committee unanimously passed the GREAT Act on February 6th, sending it to the full House of Representatives.

    Now, thanks to this week’s announcement, it’s clear that the White House is keen to take on the challenge of standardizing grant data, even in advance of a mandate from Congress.

    Here’s what the White House intends to do.

    First, working with the Department of Education and the Department of Health and Human Services, the White House will standardize the “Core Data Elements” that are used in reports that grantees submit to “a significant number of agencies.” This should be complete by the end of Fiscal Year 2018, or September 30, 2018. Details are on page 5 of the White House’s grants management Action Plan.

    Second, the White House will figure out how to govern and maintain the new taxonomy. The White House intends to complete this step by the same deadline: September 30, 2018.

    Third comes the hard part. The White House will “[D]evelop and execute [a] long-term plan for implementing data standards government-wide.” That means forcing all the grantor agencies to collect reports from their grantees in electronic data, formatted consistently with the taxonomy. The Action Plan announces no deadline for this crucial third step.

    Alongside these steps, the White House intends to create a common solution for Single Audit Reporting and build a tool to help agencies manage grant risk (page 6 of the Action Plan).

    Finally, once grant reports have been transformed into standardized data, and once new tools have been built to utilize that data, the White House will lead all grantor agencies to manage their grant programs based on risk (page 7 of the Action Plan).

    We are excited that the White House has put itself on a pathway to transforming all federal grant reporting.

    We won’t let our leaders off the hook, of course; we’ll still work to convince Congress to pass the GREAT Act right away, so that the transformation won’t just be a White House plan but a legal mandate.

    We know the road will be long. If the federal grant system were one company, it would be, by far, the world’s largest, with over $600 billion in annual revenue.

    But for the same reason, reducing grantees’ compliance burden through automation and bringing system-wide transparency to agencies and the public is too good an opportunity to miss.

    **Note: Read our full summary of the President’s Management Agenda here.**


  • February 22, 2018 9:00 AM | Data Coalition Team (Administrator)

    Regulatory technology solutions, or “RegTech,” will enable automated regulatory reporting, derive insights from regulatory information, and share information on complex markets and products. Progress in RegTech has been seen in the private sector as access to quality data improves. This progress has not yet been mirrored in the public sector here in the United States, but the potential to improve government efficiency is not far off. Our RegTech Data Summit will be a unique opportunity to dive into the policy area and hear how RegTech solutions, empowered by the Legal Entity Identifier (LEI) and Standard Business Reporting (SBR), are transforming reporting and compliance relationships.

    RegTech solutions have been defined by The Institute of International Finance as “the use of new technologies to solve regulatory and compliance requirements more effectively and efficiently.” PwC defines it as “the innovative technologies that are addressing regulatory challenges in the financial services world,” and notes that the financial environment is “ripe for disruption by emerging RegTechs” due to the growth of automation and rising compliance costs. As such, RegTech relies on the quality of its inputs – data.

    Currently, regulatory information that is collected from regulated industries continues to be of poor quality and inconsistent. For example, the Securities and Exchange Commission has failed to police the quality and consistency of the corporate financial data it collects – which has made it much more difficult for RegTech companies to use that data to deliver insights to investors. The lack of consistent and quality data impedes the development of RegTech solutions in the United States.

    Other developed countries are showing that once their regulators collect standardized data, their RegTech industries can deliver new value. For instance, Australian software vendors used the standardized data structure to build new compliance solutions. Using these solutions, Australian companies can now comply with at least five different regulatory reporting regimes within one software environment. In the 2014-15 fiscal year, SBR was saving Australian companies over $1 billion per year through automation.

    Our inaugural RegTech Data Summit on Wednesday, March 7, will explore how data standards, like SBR, can be adopted across our regulatory reporting regimes to enable RegTech to evolve into a thriving, sustainable industry. It will also examine policy initiatives such as the Financial Transparency Act (H.R. 1530), currently pending in the House of Representatives, which directs the eight financial regulators to collect and publish the information they collect from financial entities in an open data form: electronically searchable, downloadable in bulk, and without license restrictions.

    The Summit will be a unique opportunity to connect with agency leaders, Congressional allies, regulated industries, and tech companies who are defining this emerging policy area.

    Attendees will have the opportunity to hear from leaders in the government and private sector. Headline speakers include: SEC Commissioner Piwowar; Joe Lonsdale, Founder of Palantir and OpenGov, Partner, 8vc; Giancarlo Pellizzari, Head of Banking Supervision Data Division, European Central Bank; and Stephan Wolf, CEO, Global LEI Foundation.

    Summit-goers will see first-hand how RegTech solutions can modernize compliance across all regulatory regimes. Three simultaneous demos will take place: a Regulation demo, a Technology demo, and a Data demo.

    On the Regulation demo stage you will see the legal drafting tool LegisPro, which automates the drafting and amending of regulations and legal materials so they can be displayed and redlined (like Microsoft Word’s track changes). Once regulations are natively drafted as data, instead of just as documents, regulated industries will be able to digest them, and comply with them, much faster.

    In the Technology demo, with Auditchain, you will see the company’s continuous audit and real-time reporting ecosystem for enterprise and token statistics disclosure. The continuous audit technology provides the highest levels of audit assurance through a decentralized, consensus-based audit procedure under the DCARPE™ Protocol. This blockchain technology is revolutionizing how enterprises report to stakeholders and regulators.

    Last, but not least, the Data demo session will explore how non-proprietary identifiers, such as the LEI, will make regulatory information accessible to the public, open, and available for anyone to bulk download. Dozens of agencies around the world, including the U.S. Commodity Futures Trading Commission and the Consumer Financial Protection Bureau, now use the LEI to identify regulated companies.

    All in all, Summit attendees will hear from over 25 RegTech experts who are moving the chains in this new tech space, both domestically and internationally.

    The Summit will explore four key themes:

    1. Why RegTech solutions require data standardization.
    2. How regulatory agencies can maximize the promise of such solutions by coordinating changes in Regulations, Technology, and Data.
    3. Why old-fashioned document-based disclosures impede higher quality data.
    4. How, in the long term, regulatory agencies can create an entirely new paradigm for RegTech by embracing existing regimes like Standard Business Reporting (SBR).

    If you’re a data enthusiast or eager to learn how RegTech solutions can solve the challenges facing our financial regulators or regulated entities, join us March 7 at the Capital Hilton to learn how policy initiatives like the Financial Transparency Act and SBR will deliver changes, create a new RegTech industry, and generate new opportunities to apply emerging technologies like blockchain.

    This article contains excerpts from Donnelley Financial Solutions’ upcoming white paper, How Data will Determine the Future of RegTech.



  • October 16, 2017 9:00 AM | Data Coalition Team (Administrator)

    Last week the U.S. Securities and Exchange Commission proposed its first expansion of open corporate data in nearly nine years.

    Here’s where the new proposal came from, what it means, and why it matters.

    For Financial Information: a Phantom Menace

    As longtime Data Coalition supporters know, the SEC was an early adopter of open data. In February 2009, the agency finalized a rule that requires all U.S. public companies to report their financial statements using the eXtensible Business Reporting Language (XBRL) format.

    In theory, as the CFA Institute explained last month, XBRL should have allowed companies to automate some of their manual compliance processes. And XBRL should have made corporate financial information easier for investors, markets, and the SEC itself to absorb.

    But it hasn’t gone well.

    As a new report from our sister organization, the Data Foundation, explains, the SEC chose an overly-complex structure for public companies’ financial statements. Worse, the complex structure doesn’t match the accounting rules that public companies must follow (see page 22). As a result, it is unnecessarily hard for companies to create these open-data financial statements, and for investors and tech companies serving them to absorb such data.

    These difficulties have led some in Congress to conclude that the SEC shouldn’t require most companies to file financial statements as open data at all. This is the wrong reaction. The problem isn’t open data; the problem is the way the SEC has implemented it.

    Fortunately, the SEC is taking steps to fix its financial statement data. Last year, the agency signaled it may finally upgrade from the current duplicative system, in which companies report financial information twice (once as a document, then again as XBRL), and replace it with a single submission that is both human- and machine-readable. And the recommendations of the Data Quality Committee (see page 37) show how the complex structure could be demystified.

    And then there’s last week’s announcement.

    For Non-Financial Information: A New Hope

    Financial statements are an important part, but only a part, of the information public companies must file with the SEC. Companies also report their basic corporate characteristics, stock structures, executive compensation, subsidiaries, and much more. Partly because its XBRL mandate for financial statements has been a failure, the agency never got around to transforming any of this non-financial information into open data.

    For the last decade, companies have continued to report their non-financial information – even information that could easily be structured, like tables and check marks – as plain, unstructured text, in electronic documents that "mimic yesterday's paper ones." 

    Here is the cover page of Google’s annual SEC report. It should be expressed as open data. It isn’t. But change is coming.

    Last year, the Data Coalition filed a comment letter recommending that the SEC should replace all these documents with data. We recommended (on pages 2 and 12) that the SEC should begin with the “cover pages” of the main corporate disclosure forms. Cover pages include every company’s name, address, size, public float, and regulatory categories.

    The information reported on cover pages is very basic and fundamental for U.S. capital markets – and yet it is not available as structured data, creating surprising inefficiencies.

    To create a list of all well-known seasoned issuers, for instance, the Commission and investors must manually read every filing, employ imperfect text-scraping software, or outsource those tasks to a vendor by purchasing a commercially-available database. (Page 2.)

    Last November, the SEC signaled a change. In a report on modernizing non-financial disclosure, the agency recognized that adopting standardized data tags for cover page information “could enhance the ability of investors to identify, count, sort, and analyze registrants and disclosures” (page 22). And the report cited the Data Coalition’s comment letter as its inspiration for this (footnote 90).

    And last week, the SEC made it official, in a formal rule proposal on Wednesday, October 11. The agency proposed to begin requiring public companies to submit all cover page information as standardized, open data (page 105).

    Over the next two months, the SEC will collect public comments on this change from the financial industry, the tech industry, transparency supporters, and other interested parties. Then, it will decide whether to finalize the proposed rule.

    For the Future of Corporate Disclosure Information: the Force Awakens

    By transforming the basic information on the cover pages into open data, the SEC can unleash powerful analytics to make life easier for investors, markets, and itself.

    Software will be able to identify companies of particular sizes, or regulatory characteristics, automatically, from the SEC’s public, freely-available data catalogs. There will no longer be any need to "manually read every filing, employ imperfect text-scraping software, or purchas[e] a commercially-available database."
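
    For a sense of what that looks like in practice, here is a minimal sketch of the kind of query structured cover-page data would allow. The file and column names are hypothetical stand-ins for whatever fields the SEC ultimately standardizes.

```python
# Minimal sketch: once cover-page fields are machine-readable, screening filers
# becomes a simple query instead of manual review or text scraping. The file and
# column names are hypothetical stand-ins for whatever the SEC standardizes.
import pandas as pd

cover_pages = pd.read_csv("cover_page_data.csv")   # e.g. cik, registrant_name, filer_category, public_float

wksi = cover_pages[cover_pages["filer_category"] == "Well-Known Seasoned Issuer"]
print(f"{len(wksi)} well-known seasoned issuers")
print(wksi[["cik", "registrant_name", "public_float"]].head())
```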

    Of course, the cover pages are only the beginning.

    Beyond the financial statements, whose slow transformation began in 2009, and the cover page information, whose transformation is starting right now, the SEC’s corporate disclosure system is full of important information that should be expressed as open data, but isn’t. We recommended a strategy for tackling the entire disclosure system in 2015.

    Where to begin? The recommendations of the SEC’s Investor Advisory Committee are a good place to start.

    The more information is turned from documents into data, the more analytics companies, like our Coalition members idaciti, Morningstar, and Intrinio, can turn that data into actionable insights for investors (and for the SEC itself).

    The SEC’s open data transformation has been grindingly slow. But last week’s announcement shows it isn’t dead – and in fact is full of promise. We’ll keep pushing.


  • July 28, 2017 9:00 AM | Data Coalition Team (Administrator)

    This week, I joined three outside experts to co-author a paper addressing a unique opportunity in the federal government, titled Data Powered Leadership Reform: A Business Case for Federal Operational Improvements Enabled by Quality Data. Senior federal leaders are currently responding to a rare policy opportunity to address persistent structural management challenges in federal agencies.

    Responding to the President’s Government-wide Agency Reorganization Plan

    This March, the Presidential Executive Order on a Comprehensive Plan for Reorganizing the Executive Branch was issued. The government-wide executive order provides much-needed political cover to tackle fundamental challenges and lays out a plan to bring “efficiency, effectiveness, and accountability” to executive agencies. The resulting directive from the Office of Management and Budget (OMB) (Comprehensive Plan for Reforming the Federal Government and Reducing the Federal Civilian Workforce, M-17-22) presents the roadmap for ambitious federal leaders to dramatically alter how business is conducted across the federal government.

    In this management order, OMB requires each agency to assess internal business line functions by considering factors like duplication, essentiality, appropriateness of federal ownership (vs. State, local, or private-sector), cost-benefit considerations, efficiency and effectiveness, and customer service goals (see the Table on page 6 of M-17-22). Each agency then owes OMB reorganization plans this fall as part of their FY 2019 budget request. We will start to see the results detailed publicly when the President releases the FY 2019 budget request in February 2018. According to OMB:

    The Government-wide Reform Plan will encompass agency-specific reforms, the President’s Management Agenda and Cross-Agency Priority Goals, and other crosscutting reforms. The final reforms included in the Government-wide Reform Plan and the President’s FY 2019 Budget should be reflected in agency strategic plans, human capital operating plans, and IT strategic plan. Agencies will begin implementing some reforms immediately while others will require Congressional action. (see item 7 on page 5 of M-17-22)

    If your organization provides products, services, or solutions to the federal government, then you need to be tracking this process. The following graphic breaks down the timeline in detail.

    See page 5 of M-17-22.

     

    Focusing on Quality Operational Data is the First Step

    Our paper, summarized on Nextgov, highlights a fundamental challenge in leading complex, human-powered bureaucratic systems – inadequate operational or material data. We believe that such considerations need to be a fundamental part of this government-wide reorganization process.

    Our paper starts by defining the business case for such reforms, and puts this in context of senior agency officials’ daily workflows. We walk through ten specific management challenges such as structural complexity, management feedback loops, the importance of citizen engagement, and the crucial role of political oversight.

    Common to all of these business cases is the issue of poor data, both operational (i.e., mission-agnostic data that represent the resources, decisions, transactions, outputs, and outcomes of work) and material (i.e., mission-specific data that represent persons, places, and things).

    Of course, both operational and material data must also be of high quality to be useful, which means they must be accurate, consistent, and controlled (see this 2016 White House open data roundtable briefing paper as well as the CIO.gov Open Data Principles).

    For example, the DATA Act represents an incredibly valuable government-wide operational data set.

    Because funding is a common factor of every federal program, standardized spending data lets us see how money flows through the federal agencies accurately, consistently, and comprehensively, illuminating an accurate picture of how the government functions (more here). This is the true value of the DATA Act.

    If we focus on building out accurate, consistent, and controlled data, we can start to fix the structural conditions and help federal leaders champion tangible reforms.

    Specific Recommendations for this Administration That Don’t Require Legislation

    This Administration is providing the environment to accomplish this. But it will require diligence, ingenuity, and coordinated political willpower to achieve any success.

    That is why we encourage primary reliance on high-quality data in government-wide management. It is something leaders can immediately agree on while leveraging existing efforts.

    Our paper provides the following recommendations:

    1. OMB should adopt the DATA Act Information Model Schema (DAIMS) as the primary government-wide operational data format to align various agency business functions. With over 400 unique data elements, the DAIMS represents the most comprehensive and unified schema of federal operations in U.S. history. The DAIMS links budget, accounting, procurement, and financial assistance datasets that were previously segmented across agency systems and databases.

    • OMB should rely on the DAIMS’s open documentation architecture which allows for ready expansion and linkage to other administrative datasets.
    • OMB’s required Annual Performance and Annual Financial Report processes should be modernized using a machine-readable, DAIMS-aligned schema.
    • In accordance with the DATA Act’s Section 5 vision for grant reporting modernization and the work completed by the HHS DATA Act Program Management Office pilot project, OMB should create a centralized grant reporting process to extend the DAIMS’s ability to track post-award federal spending.

    2. OMB should adopt and seek to codify the governance body of the National Information Exchange Model (NIEM) and encourage the schema’s use as the primary government-wide material data format to facilitate inter-agency and state-local records exchange around shared missions.

    • The NIEM project, currently administered voluntarily by DHS, manages the expansion of community-based schema governance processes (there are currently fourteen specific domains, including human services, justice, and emergency management). In coordination with the data standardization work of GSA’s US Data Federation (an outgrowth of the Data.gov effort) and Project Open Data, NIEM stands poised to foster a base of standardized material data that can inform the natural harmonization of common mission data within agency environments.

    3. OMB’s initiative to adopt a government-wide Technology Business Model (TBM) taxonomy, to enable standardized federal technology investment data, should be celebrated.

    • As referenced in the Fiscal Year 2018 budget request, OMB should build upon the DAIMS as it integrates the TBM within the context of the annual Capital Planning and Investment Control (CPIC) process.

    The outlined recommendations are just a starting point for how the Administration, Congress, and federal agencies can truly modernize! I strongly encourage all stakeholders to get behind these crucial data initiatives.


  • July 14, 2017 9:00 AM | Data Coalition Team (Administrator)

    On June 26th, our DATA Act Summit set records: our best-attended event ever (738 registrations), with the most speakers we’ve ever featured (66, including six Members of Congress) and the most exhibitors we’ve ever hosted (25).

    But the really important number is 3.85 trillion – the number of U.S. federal dollars spent in 2016 and now tracked and published as open data. The DATA Act’s main deadline finally arrived last May: every federal agency began reporting spending using the same government-wide data format, and the Treasury Department combined their submissions into a single, unified open data set, for the first time in history.

    At this fourth annual DATA Act Summit, we no longer had to point to the future and predict the ways open spending data would benefit government and society. The future had come and the benefits were all around us – a world of new ways to visualize, analyze, and automate information about how taxpayers’ money is used.

    But we are never going to do this again.

    Last month’s DATA Act Summit, presented by Booz Allen Hamilton, was the final one.

    Here’s what we learned, and why we will never host another DATA Act Summit.

    For government management, this new data set is the center of everything.

    Who’s using the new data set? SBA CFO Tim Gribben, acting DHS CFO Stacy Marcott, NRC CFO Maureen Wylie, and HHS Deputy Assistant Secretary Sheila Conley, to name a few.

    Congress may have passed the DATA Act unanimously out of a desire to deliver transparency to American taxpayers. But the real beneficiaries of the law’s mandate for agencies to standardize and publish their spending information are the agencies themselves.

    Under the DATA Act, the Treasury Department created a single data structure, the DATA Act Information Model Schema, or DAIMS, that brings budget actions, account balances, grants, contracts, and loans into a single view. The DAIMS is the first, and only, multi-agency, multi-function data structure in the entire government and currently tracks over 400 unique data elements.
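    To give a flavor of what that single view makes possible, here is a simplified, hypothetical illustration of the kind of linkage the DAIMS enables: account-level balances and individual awards share identifiers such as the Treasury Account Symbol (TAS), so they can be joined into one picture. The field names and figures below are invented for illustration; they are not the actual DAIMS element names.

```python
# Simplified illustration of DAIMS-style linkage: account balances and
# awards share identifiers, so they can be joined into a single view.
# Field names and values are illustrative, not actual DAIMS elements.

account_balances = [
    {"agency": "SBA", "tas": "073-1154", "obligated": 1_200_000},
]

awards = [
    {"tas": "073-1154", "award_id": "GRANT-001", "type": "grant",    "amount": 700_000},
    {"tas": "073-1154", "award_id": "CONT-042",  "type": "contract", "amount": 500_000},
]

# Join awards to the accounts that funded them via the shared Treasury
# Account Symbol (TAS), producing one account-to-award view.
for acct in account_balances:
    funded = [a for a in awards if a["tas"] == acct["tas"]]
    total = sum(a["amount"] for a in funded)
    print(acct["agency"], acct["tas"],
          "obligated:", acct["obligated"],
          "awards linked:", len(funded),
          "award total:", total)
```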

    Now that they’ve made the investment to translate their disparate, far-flung spending compilations into the DAIMS, agencies can visualize and analyze their finances in new ways.

    In just one day, we learned that Department of Homeland Security leaders intend to use the new data set to target which areas of the vast agency need the most human capital investment – because, for the first time, they can see salary spending by sub-agency and by account. The Nuclear Regulatory Commission will use the new data set to compile its Congressional budget request. And a Health and Human Services IT executive predicted that her department will be able to immediately understand the full scope of resources devoted to combating a large-scale event, like an epidemic.

    For the first time, governmentwide executives at the White House have a single, unified view of the scores of billions of dollars spent on software and systems. “The importance of being able to describe that cost cannot be overstated,” said acting federal Chief Information Officer Margie Graves.

    Inspectors general at every agency are revving up their data analytics operations – because the new data set gives them a new source for indicators of waste, fraud, and abuse. So is Congress, said Sen. Rob Portman, chairman of the Senate Permanent Subcommittee on Investigations: “We’ll use this data. We’re happy to have it.”

    And more uses are coming! The winners of last April’s DATA Act Hackathon showed how the new data set can be used for evidence-based policymaking, tracking the localized impact of grants, scrutinizing procurement, and groundbreaking analytics.

    The more the DAIMS is expanded, the more data is put into this unified view, and the more useful the data set will become. Congress must amend the DATA Act to require the DAIMS to go into more granular detail about grants and contracts – right now, only summaries of each award are part of the structure.

    The Treasury Department has shown us the best way to run government-wide projects.

    The Treasury Department’s all-female DATA Act implementation team, led by Deputy Assistant Secretary Christina Ho, delivered the first-ever government-wide picture of federal spending – on time and under budget.

    Presidential administrations as far back as Jefferson's have been demanding a single, “consolidated” view of the federal government’s finances – so that “every member of Congress, and every man of any mind in the Union, should be able to comprehend them, to investigate abuses, and consequently to control them.”

    The DATA Act provided a mandate for the creation of this single view, using a government-wide data standard and a requirement for every agency to follow it.

    But it fell to a small team at the Treasury Department, led by Deputy Assistant Secretary Christina Ho, to design the DAIMS, educate CFOs’ offices on how to translate disparate spending information into that common standard, and help all of them meet the May 2017 deadline – mostly without any extra funding.

    Ms. Ho and her team succeeded beyond expectation. The project “was on time, it was under budget, and it delivered on its promise. Not many government projects can say that,” said GSA Technology Transformation Service commissioner Rob Cook.

    How did they do it? The Treasury team, assisted by specialists from the General Services Administration’s 18F tech development group, conducted the first-ever government-wide agile project. Instead of designing the DAIMS all at once, Treasury "produce[d] successive versions of the schema that incorporate[d] regular feedback from experts across the various communities.”

    Fiscal Assistant Secretary David Lebryk, to whom Ms. Ho reports, compared the DATA Act project favorably to Treasury’s 2013 roll-out of the Governmentwide Treasury Account Symbol (GTAS) account reporting system. “We were able to do something in six months that took us four years using a traditional design process—at a fraction of the cost,” Lebryk said.

    Next to be transformed? Grantee reporting.

    This is the future of federal grantee reporting.

    The first version of the DATA Act introduced in Congress in 2011 was bolder than what finally became law in 2014.

    The original bill would not just have standardized federal agencies’ spending information. It would have transformed the whole ecosystem of federal grantee reporting, too. The 2011 proposal would have set up a governmentwide data structure to modernize reporting by grant recipients.

    The final law stepped back from this vision – and, instead, set up a pilot program to test whether standardized data might help grantees reduce their compliance burden. The pilot program, conducted by the Department of Health and Human Services, ended last May, and the White House Office of Management and Budget is going to issue a report to Congress next month to say whether data standardization is a good idea.

    At the DATA Act Summit, three panels of experts on grantee and nonprofit compliance told our audience that the grantee reporting ecosystem needs governmentwide data standards.

    Kerry Neal, deputy director of the Environmental Protection Agency’s grants office, shared a vision of “seamless integration” from grant application, to award, to disbursement, to performance reporting. Today, federal grantees are subject to a hailstorm of duplicative reporting requirements, each involving expensive manual compliance processes. If the government adopted a common data structure for all those reports, software could automate this burden.
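    A hypothetical sketch of that automation: if every grantee maintained one canonical record in a common structure, software could render it into each agency’s report layout instead of staff re-keying the same facts. The report layouts and field names below are invented for illustration; they are not any agency’s actual forms.

```python
# Hypothetical sketch: one canonical grantee record, mapped by software onto
# several agency report layouts. Layouts and field names are invented here
# purely to illustrate why a common data structure reduces grantee burden.

grantee_record = {
    "recipient_name": "Example University",
    "award_id": "ED-2017-0001",
    "period": "2017-Q2",
    "amount_drawn": 250_000,
}

report_layouts = {
    "agency_a_quarterly": {"Recipient": "recipient_name", "Award #": "award_id",
                           "Quarter": "period", "Drawdowns": "amount_drawn"},
    "agency_b_ffr":       {"Grantee": "recipient_name", "Federal Award ID": "award_id",
                           "Reporting Period": "period", "Cash Disbursed": "amount_drawn"},
}

def render_report(record, layout):
    """Fill one agency's report fields from the single canonical record."""
    return {report_field: record[source_field]
            for report_field, source_field in layout.items()}

for name, layout in report_layouts.items():
    print(name, render_report(grantee_record, layout))
```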

    Grantee reporting is the next frontier of data standardization – and the discussions at the Summit laid the foundation we’ll need to get it done.

    The DATA Act Summit will never happen again.

    Booz Allen Hamilton Vice President Bryce Pippert, lead sponsor, closes the Summit.

    So why end a good thing?

    Because the DATA Act is done. Thanks to the hard work of advocates in Congress and visionaries in the executive branch, standardized and open data is now the centerpiece of the federal government’s financial management.

    The work of data standardization is not done. The DAIMS must be expanded to include more of the government’s management information, beyond the basic spending operations it currently covers so well.

    As we campaign for future legislative reforms to expand the DAIMS, our programming will expand, too. Expect events focusing on spending and performance data, spending and grant reporting data, spending and oversight data.

    At the Data Coalition, we've got to keep on moving! Thank you for joining us on this journey.


  • May 09, 2017 9:00 AM | Data Coalition Team (Administrator)

    Today, for the first time in history, the U.S. federal government’s spending information is one single, unified data set.

    Under a deadline set by the DATA Act of 2014, today every federal agency must begin reporting spending to the Treasury Department using a common data format. And Treasury has published it all online, in one piece, offering a single electronic view of the world’s largest organization.

    Until today, different types of federal spending information were all tracked in different ways and reported to different places. Agencies reported their account balances to Treasury, budget actions to the White House, contracts to GSA, and grants to the Awards Data System.

    But today, these agencies are reporting all of this information to a new database at Treasury, and Treasury is reporting it to you.

    Until today, if you wanted to view the federal government’s account balances, you would have to file a Freedom of Information Act request with every agency. Even if you did that, you wouldn’t be able to figure out which grants and contracts were paid from which accounts.

    But today, every agency is linking its accounts, budget actions, grants, and contracts together, showing which grants and contracts are paid from where. Here's an interactive picture of it all. And here's the data set, ready to download. Try it!
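    If you do try it, a first analysis can be only a few lines long. Here is a minimal sketch of summing obligations by agency from a downloaded extract; the column names and sample rows are illustrative stand-ins, not the data set’s exact headers.

```python
# Minimal sketch of a first pass at the new data set: total obligations by
# agency. The column names and sample rows are illustrative stand-ins, not
# the published extract's exact headers.

import csv
import io
from collections import defaultdict

sample_csv = """agency_name,award_id,obligated_amount
Small Business Administration,GRANT-001,700000
Small Business Administration,CONT-042,500000
Environmental Protection Agency,GRANT-207,1250000
"""

totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample_csv)):
    totals[row["agency_name"]] += float(row["obligated_amount"])

for agency, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{agency}: ${total:,.0f}")
```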

    Why does this matter?

    In 1804, President Thomas Jefferson wrote to his Treasury secretary, Albert Gallatin, that the government’s finances had become too complex for Congress to understand – allowing spending and debt to rise out of control.

    Jefferson hoped that the scattered “scraps & fragments” of Treasury accounts could be brought into “one consolidated mass,” easier to understand, so that Congress and the people could “comprehend them … investigate abuses, and consequently … control them.”

    Jefferson’s goal was never fully realized – not until today.

    This is what Thomas Jefferson told his Treasury Secretary to create.

    Congress and the White House continued to track spending by appropriation and budget, while federal agencies developed their own complex accounting methods. In 1990, federal agencies began publishing regular financial statements, summarizing all their accounts, but not providing detail. In 2006, then-Senator Barack Obama and Senator Tom Coburn passed a law to publish, online, a summary of every federal grant and contract.

    Even after the reforms of 1990 and 2006, these records of accounts, budgets, grants, and contracts all remained segregated from one another, and could not be connected into “one consolidated mass” – not until today.

    Today’s data set brings all that information together in one piece, and links it. We can see how budget actions, account balances, and grant and contract awards all relate to each other.

    Starting today, we can finally run data analytics across the whole government, all agencies, to illuminate waste and fraud. (In Washington, federal leaders got a first taste of this at the first-ever DATA Act hackathon, two weeks ago.)

    Starting today, we can track the economic impact of Congress’ spending decisions, because we can finally match laws Congress passes to the grants and contracts that are awarded under those laws.

    Starting today, the federal government can operate as one enterprise, the way private-sector companies do, because its dozens of agencies’ thousands of financial systems are all speaking the same language.

    Last month, former Microsoft CEO Steve Ballmer announced that he had invested $10 million and years of effort into USAFacts.org, a new attempt to create one picture of government spending. Ballmer’s team had to combine – manually – budget information from the White House, financial statements from the Federal Reserve, and state and local sources. USAFacts.org didn’t even try to integrate grant and contract details; there was no way to link them.

    If Ballmer’s team had just waited a month, they would have found much of their work – at least the federal part – already done, in the new data set.

    The data set isn’t perfect (much more on that later), but it really is “one consolidated mass.”

    How did this happen?

    Six years of legislating, lobbying, courage, coding, and cajoling – that’s how.

    First came the legislating. In June 2011, Congressman Darrell Issa and Senator Mark Warner introduced the DATA Act. Their goal? “Standardizing the way this information is reported, and then centralizing the way it’s publicly disclosed,” said Warner.

    Issa and Warner were right: data standards were, and are, the key to transforming the chaos of federal spending into “one consolidated mass.” If federal agencies all used the same data format to report their different kinds of spending information, then it could all be brought into one picture.

    But the data format didn’t exist. Issa and Warner proposed to require the executive branch to create one.

    The DATA Act earned early support in the House, where Issa chaired the Oversight Committee, but went nowhere in the Senate. Data standardization was not the first issue on most Senators’ minds.

    Then came the lobbying. In 2012, I resigned from Rep. Issa’s Oversight Committee staff to start what was then called the Data Transparency Coalition, the first, and still only, open data trade association. Our first mission: rally tech companies to support the DATA Act.

    Tech companies have plenty of self-interest in supporting reforms like the DATA Act. As the government starts publishing its information in standardized formats, analytics software gets a lot more valuable.

    Still, the Coalition didn’t grow very fast. The payoff for our efforts – a unified data set covering all federal spending – was years in the future (today!), and so were most of the business opportunities. Our member companies were signing up to support a long-term vision, which isn’t a natural use for marketing budgets.

    We hosted our first DATA Act Demo Day, then our second. Sarah Joy Hays came on board and pulled off a spectacular first-ever open data trade show, Data Transparency 2013, with credentials and keynotes and exhibit booths and everything – then four more.

    Thanks to Warner’s persistence, support from the Sunlight Foundation and civil society, and our new tech-industry push, things began to happen in the Senate. Sen. Rob Portman signed on as a cosponsor and the crucial Homeland Security and Governmental Affairs Committee started to get interested in data standardization.

    But courage would be required, especially Warner’s.

    Behind the scenes, the Obama White House did its best to sink the bill. This was surprising. President Obama was a strong public supporter of open data in government. His Open Data Policy directed all federal agencies to standardize and publish all their information as open data.

    But his White House Office of Management and Budget wasn’t on board. OMB didn’t want the challenge of standardizing all spending information, nor did OMB want anyone else to do the job. OMB recommended changes to the DATA Act that used nice words but would have gutted its mandate.

    But Warner stood up to the White House. He rejected the proposed changes and kept the bill strong.

    A few months later, both chambers of Congress unanimously passed the DATA Act. And on May 9, 2014, three years ago today, President Obama signed it into law, very quietly.

    With the law on the books, a coding countdown began. The Treasury Department had one year to come up with a common data format for government spending information – the chaotic, fractured financial, grant, and contract details spread across thousands of systems that had never before been coordinated.

    Treasury also had to figure out how, exactly, agencies would deliver their data using that common format. Nobody had ever before created a system like what was needed.

    Most government management laws die like this: Congress passes a law and issues some celebratory press releases. The White House, or GSA, or Treasury sets up committees and procedures to do the work. But the work turns out to be hard and complicated, and nobody in the administration really wants to do it – they’re acting because Congress told them to. As soon as Congress’ attention moves on to other topics, the bureaucrats write reports pretending the work has been done. Or, better yet, the project is combined with another one, it changes ownership several times, and the law’s original goals are gradually forgotten.

    The DATA Act avoided this fate – largely because of one person.

    At Treasury, Deputy Assistant Secretary Christina Ho had already been trying to standardize spending data. (Christina was the first to find the Jefferson letter I quoted earlier, in fact.)

    Once the DATA Act became law, she was put in charge of implementing it, and she made up her mind that this time would be different.

    Christina assembled a team that shared her ambition and understood why we needed a unified data set covering all spending. They got to work.

    Christina’s team created the data format: the DATA Act Information Model Schema, or DAIMS, which defines the common data fields of federal spending and shows how they relate to one another.

    They did this work in the open, in public, using the GitHub coding platform to take suggestions from the whole world and show their choices. Nothing like this had been done in government before.

    They announced the DAIMS on May 8, 2015, one day before the deadline. That triggered a second countdown: all agencies had to report spending data by May 9, 2017.

    And to help agencies deliver their information, Christina recruited the 18F technology development center at the General Services Administration. 18F built the DATA Act Broker, a piece of open-source software that collects and validates spending data from every agency. They built it using Agile methodology, with constant testing and revision.

    Here is the code of the DATA Act Broker; download it if you want.

    Nothing like this had been done in government before either.
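    For a flavor of what “collects and validates” can mean in practice, here is a simplified, hypothetical illustration of a cross-file check: making sure each award record points back to an account that actually appears in the financial file. These rules and field names are invented for illustration; they are not the Broker’s actual rule set.

```python
# Simplified illustration of broker-style validation before accepting an
# agency submission: cross-file checks that award records reference accounts
# present in the financial file. Rules and field names are invented for
# illustration; they are not the DATA Act Broker's actual rules.

def validate_submission(account_rows, award_rows):
    """Return a list of human-readable validation errors (empty if clean)."""
    errors = []
    known_tas = {row["tas"] for row in account_rows}
    for i, award in enumerate(award_rows, start=1):
        if award["tas"] not in known_tas:
            errors.append(f"award row {i}: TAS {award['tas']} not in financial file")
        if award["amount"] < 0:
            errors.append(f"award row {i}: negative obligation {award['amount']}")
    return errors

accounts = [{"tas": "073-1154", "obligated": 1_200_000}]
awards = [
    {"tas": "073-1154", "award_id": "GRANT-001", "amount": 700_000},
    {"tas": "099-9999", "award_id": "CONT-042",  "amount": 500_000},
]

for problem in validate_submission(accounts, awards):
    print(problem)
# award row 2: TAS 099-9999 not in financial file
```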

    But coding wasn’t enough. The DATA Act’s supporters outside the government, and Christina’s team inside, had to do a great deal of cajoling.

    Even with the DAIMS providing a standard structure for all government spending information, and a DATA Act Broker easing the process, the law didn’t really have teeth.

    There were no penalties for agencies that didn’t report standardized spending data. And OMB made it clear that the Obama administration didn’t really care if they did, or didn’t.

    OMB couldn’t, or wouldn’t, create a list of the agencies required to comply. OMB tried to claim that most of the DAIMS wasn’t really required by the law – in order to shut it down later. OMB insisted on a weaker DAIMS than Treasury wanted, in which financial information comes right from source systems, but grant and contract information doesn’t.

    With a lack of leadership from the White House, we had to push agencies toward compliance in other ways.

    First, a few agencies started to realize that standardizing their spending information would make their own work easier, and so we celebrated them at our events.

    The Small Business Administration was the first, and best. Chief Financial Officer Tim Gribben used the DAIMS to visualize which SBA grants were being paid from which of its accounts, and plot them on a map. This would have required a bunch of data calls before the DATA Act. Now, it was automatic.

    In 2015, over 600 people participated in our DATA Act Summit and saw demonstrations of what leaders like Tim were doing. Ditto in 2016.

    Second, Congressional committees stayed involved, instead of moving on. The House Oversight Committee held four hearings focusing on the DATA Act. Behind the scenes, we stayed in touch with committee staff and Members, delivering intelligence and describing the law’s long-term vision.

    Every year, we brought tech companies to Capitol Hill to remind Congress why the DATA Act was important.

    Members of Congress publicly rebuked OMB for slow-walking the DATA Act, and told the agencies they’d celebrate compliance.

    Rep. Mark Meadows even did his own DATA Act software demonstration – on our stage. Members of Congress don’t usually do demos.

    Third, we worked to spread the word about the DATA Act’s benefits to the people who’d have to do the work – especially federal financial management professionals, who’d have to report the data, and inspectors general, who’d have to audit it.

    In 2016 we founded the Data Foundation, a new nonprofit research organization. Its first piece of research, The DATA Act: Vision & Value, which we co-published with MorganFranklin Consulting, told federal agencies why the DATA Act mattered.

    The cajoling worked. Not every agency is going to make today’s deadline, but almost all of them will – and even the worst ones are submitting partial reports.

    And we’ll keep cajoling until all reports are in.

    What comes next?

    The data set is live. Now, it sure had better get some use! If the data set is used for antifraud analytics, internal management, and public transparency, especially by the federal agencies themselves, its quality will get better and better.

    At next month’s fourth annual DATA Act Summit, we’ll highlight the agencies, tech companies, and coders who are doing the most amazing things with this new resource. We’ll celebrate the winners of last month’s DATA Act hackathon too.

    We’re not out of the woods yet.

    Last week, the Data Foundation’s new report with Deloitte, DATA Act 2022, described the six main challenges to the DATA Act’s success. We need to spend the next five years dealing with those.

    What are the challenges? The most serious is that DATA Act reporting is running alongside old-fashioned, non-standardized reporting. Agencies still have to report the same information using documents and non-standardized legacy databases like the FPDS, even as they comply with the new DATA Act mandate.

    As long as that happens, there’s a danger that agencies will see the legacy databases as the main system, and the DATA Act as an add-on.

    Congress needs to kick the stool out from under this duplication, and direct the government to make the DATA Act the main, and eventually the only, way that spending is reported. DATA Act 2022 explains how.

    The second-most-serious is that the government continues to use the DUNS Number to identify grantees and contractors. The DUNS Number is owned by Dun & Bradstreet. Dun & Bradstreet has a monopoly, protected and profitable, on spending data. Until that monopoly is broken, the private sector won’t be able to take full advantage of the data set.

    Passing the DATA Act and getting agencies’ spending data took six years. Fully realizing its vision will take many years more.

    But every moment has been worth it. Every moment will be worth it. A unified federal spending data set makes our democracy better, in so many ways.

    Today, we thank the Data Coalition’s members and Data Foundation’s supporters, without whom none of our work would have been possible.

    And today, we celebrate Darrell Issa, Mark Warner, Christina Ho, Tim Gribben, and all the other leaders who caught Jefferson’s dream of a single, unified federal spending data set, and didn’t let go.


