
Blog

  • March 08, 2019 9:00 AM | Data Coalition Team (Administrator)

    When President Trump signed the Foundations for Evidence-Based Policymaking (FEBP) Act (P.L. 115-435) in January, the Data Coalition celebrated a major milestone for open data legislation in the federal government. Title II of the law, the Open, Public, Electronic, and Necessary (OPEN) Government Data Act, is a transformative open data policy that modernizes the way the government collects, publishes, and uses non-sensitive public information. The law mandates that all non-sensitive government data assets be made available as open, machine-readable data under an open license by default. The Data Coalition advocated for this legislation for over three years and it is now law. So, what next?  

    The Data Coalition, Center for Data Innovation (CDI), and the American Library Association hosted a joint panel to discuss the OPEN Government Data Act’s impact on the future of open data in the United States. The Coalition’s own Senior Director of Policy, Christian Hoehner, as well as representatives from BSA | The Software Alliance, the Internet Association, SPARC, and the Bipartisan Policy Center, discussed what this new law means for government modernization, data-centric decision making, and the implementation of successful federal open data initiatives.

    Congressman Derek Kilmer (D-WA-6), an original sponsor of the OPEN Government Data Act and Chairman of the newly established House Select Committee on the Modernization of Congress, provided opening remarks that touched upon the bill's wide-reaching benefits as well as the future of open data policy and implementation. Congressman Kilmer touted the new law’s potential to create more economic opportunity for people in more places. Greater access to government data will allow Americans to start new businesses, create new jobs, and expand access to data and resources often concentrated in urban areas.

    The Congressman emphasized that the law passed with strong bipartisan support. He observed that the simple notion of giving taxpayers additional access to public data is beneficial for citizens.

    “Simply put, the OPEN Government Data Act gives the data the government collects to the people who pay for it, which is all of you,” Congressman Kilmer said during his remarks. “The bill passed because people across the increasingly wide political spectrum know that making access to government data…is a good idea.”

    Opening valuable government data assets will increase government accountability and transparency as well as technological innovation and investment. Under the law’s mandates, public government data will be made machine-readable by default without compromising security or intellectual property. This is a major victory for the open data movement, but as Congressman Kilmer acknowledged, this is certainly not the last step. There is still much to be done to ensure the law is successfully implemented.

    While the OPEN Government Data Act draws many of its mandates from already established federal orders – specifically President Obama’s 2013 “Open Data Policy-Managing Information as an Asset” (M-13-13) – the law adds additional weight and formalizes a number of open data best practices. The law legally establishes definitions for “open license,” “public data asset,” and “machine-readable,” which clarify exactly which assets the law covers. As Nick Shockey of the Scholarly Publishing and Academic Resources Coalition (SPARC) stated, the strong definitions behind open licensing will go a long way toward codifying data practices around research and beyond.

    Establishing these definitions was critical as it reinforces another key part of this law: requiring the publication of public data assets as “open government data assets” on agencies’ data inventories. Under M-13-13, agencies tried to establish a comprehensive data inventory, but internal silos and other barriers prevented the publication of certain data sets.

    Nick Hart of the Bipartisan Policy Center drew from an example to explain the barriers. “The Census Bureau, for example, has some data that they received from other agencies. The other agencies may not have reported it on their inventory because they gave it to the Census Bureau, [and] the Census Bureau didn’t report it because it wasn’t their data.” Identifying the extent of government data assets has been a constant challenge for researchers and industry, but the open data inventories mandated in Title II will clarify exactly what public data assets the government has on hand.

    The open data inventories will contain all government data assets that aren’t open by default but would otherwise be available under the Freedom of Information Act (FOIA). “That’s going to, I think, make these data inventories a lot more robust and a lot more useful to researchers as they’re trying to identify what data an agency has that might not be currently made available through Data.gov," said Christian Troncoso, Policy Director at BSA | The Software Alliance.
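    For readers unfamiliar with what an inventory entry contains, below is a minimal sketch of a single record, loosely modeled on the Project Open Data metadata schema already used for agency data.json files. The agency, dataset, URLs, and values are hypothetical, chosen only to illustrate the kind of metadata an inventory carries.

    ```python
    import json

    # Sketch of one agency data-inventory record, loosely following the
    # Project Open Data metadata schema used for agency data.json files.
    # Every value here is a hypothetical example, not an actual agency entry.
    inventory_record = {
        "title": "Example Grant Award Summaries",
        "description": "Summary-level award data that is releasable under FOIA.",
        "identifier": "https://data.example-agency.gov/grant-award-summaries",
        "accessLevel": "public",  # public | restricted public | non-public
        "license": "https://creativecommons.org/publicdomain/zero/1.0/",
        "modified": "2019-01-15",
        "distribution": [
            {
                "mediaType": "text/csv",
                "downloadURL": "https://data.example-agency.gov/grant-award-summaries.csv",
            }
        ],
    }

    print(json.dumps(inventory_record, indent=2))
    ```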

    Another important aspect of the OPEN Government Data Act discussed during the event was the establishment of an agency Chief Data Officer (CDO) Council. Every federal agency is now required to appoint a CDO, and these individuals will be tasked with implementing the components of the new law. One major challenge going forward will be how federal agencies establish and equip their CDO functions. That is, will they recognize that, as the law intends, the CDO function should be distinct from and established independently of traditional information technology leadership functions? The hope is that CDOs will build communities of practice and work with their fellow members of the Council to share best data practices that could be implemented across agencies.

    In the end, CDOs will not just be Chief Information Officers under a different name. They will be the sentinels of quality, accurate, and complete agency data and, hopefully, will shift the culture to one of data management and data-driven decision making. As our Senior Director of Policy, Christian Hoehner, said, better education about how data will be utilized by agencies and the public will help incentivize agencies to ensure compliance with the law.

    The panel served as a celebration of how far open data has come and judging by the opinions of all panelists, the government is on track to continue expanding its open data initiatives. The Federal Data Strategy and President's Management Agenda show that the government recognizes the value of its vast stores of data. These initiatives will run parallel to the implementation of the OPEN Government Data Act, and in some cases, they will also intertwine.

    These executive efforts, including a recent Executive Order on U.S. artificial intelligence, and bipartisan congressional support show that open data utilization is a government priority, and our Coalition is pleased to see action by the Executive Branch, Congress, and agencies.

    While there is still plenty of work to be done once the law officially takes effect on July 8, 2019, the passage and support of the OPEN Government Data Act is an indication that the government is moving closer to modernizing its processes and management using standardized, open data. The Data Coalition looks forward to working with the government on successfully implementing this transformative legislation.

    If you would like to watch the panel, “What’s Next for Open Data in the United States,” click here.


  • November 15, 2018 9:00 AM | Data Coalition Team (Administrator)

    Earlier this year, the Administration released its vision for the way our government can better collect and leverage data in four main categories:

    1. Enterprise Data Governance.
    2. Access, Use, and Augmentation.
    3. Decision Making & Accountability.
    4. Commercialization, Innovation, and Public Use.

    The Federal Data Strategy will define principles and practices for data management in the federal government. The Office of Management and Budget (OMB) is collaborating with the private sector, trade associations, academia, and civil society to gather feedback and comments on the proposed strategy.

    The Data Coalition joined the Bipartisan Policy Center (BPC) and OMB to co-host a public forum on the Federal Data Strategy last week (November 8th). The second in a series on the strategy, this forum allowed the public, businesses, and other stakeholders to comment on the recently published draft set of practices. Data Coalition members DeepBD, Elder Research, Morningstar, SAP, Tableau, and Xcential, as well as Data Foundation supporter companies Kearney and Company and REI Systems, provided feedback on the proposed practices. In their comments, members emphasized the value of federal data as applied in analytics, a standards-based modeling path, and the use of machine-readable forms that would create a better link between government services and citizens.

    Most commenters acknowledged that the Federal Data Strategy represents an effort to initiate much-needed changes to the cultures around data across agencies and offered ways to improve the practices and implementation. Some attendees emphasized the need for greater clarity on the draft practices and provided examples of how the government can maximize the value of its data. Clarity and direction, they argued, would help move the strategy from an idea to a potential set of actionable steps for cultural change.

    Better data utilization and management were noted as key to the success of the strategy. The Digital Accountability and Transparency Act (DATA Act) has significantly increased the quality of the data reported to the government. Our members who provided public statements were quick to bring attention to these improvements and how the DATA Act laid the groundwork for efforts to reach CAP Goals 2 (Data as a Strategic Asset) and 8 (Results-Oriented Accountability for Grants Management).

    According to Sherry Weir of Kearney & Company, if OMB starts with a budget request in the same standardized data format, the U.S. Treasury and agencies could then merge information reported under the DATA Act (USAspending.gov) with federal appropriations bills. This connection is only possible with intelligent data stewardship, but it has the ability to connect otherwise disparate datasets across the budget lifecycle and provide insights that can motivate better, more informed federal spending and policymaking.

    Throughout the day, a few commenters expressed concern over the complexity of the current draft strategy. They pointed out that the strategy, laid out across forty-seven practices and organized into ten principles, is too unwieldy for executive decision makers to readily articulate across their organizations. The MITRE Corporation suggested that the strategy could be cut down to a single-page reference document and provided an example.

    It would be no simple task to distill the strategy. Panelists suggested that the Federal Data Strategy Team look for small wins in data modernization efforts to build momentum toward the larger goals.

    Larger conclusions presented by commenters included a view that the strategy fails if public servants cannot work across agency data silos to make better, data-driven management decisions that best serve the public.

    Stewardship is key to the success of the Federal Data Strategy, and the Administration needs sustained leadership to guide it in order to create the most value out of its vast stores of data. With the feedback of all these industry leaders, advocates, and data experts, OMB is now tasked with using the public perspective to build a data strategy that facilitates efficient government data management.

    The Data Coalition was thrilled to partner with BPC and OMB on this important forum. Audio recordings of the forum are available online, as well as social media coverage of the event. As a reminder for interested parties, public comments on the updated Federal Data Strategy are due by Friday, November 23. Comments can be submitted online in various forms. The Data Coalition will be providing its own written comments on the Federal Data Strategy, which we hope the Administration will strongly consider when forming the final strategy.


  • September 07, 2018 9:00 AM | Data Coalition Team (Administrator)

    Modernizing financial regulatory reporting is no easy task. Regulators and regulated entities continue to rely on outdated technology and reporting systems. Open data standards are the key: by replacing document-based reports with standardized data, regulators can spur modernization.

    Data companies behind the Data Coalition have the solutions to make financial regulation more efficient and transparent for both regulatory agencies and the industry. Standardized data can be instantly analyzed, updated, and automated.

    On September 12, 2018, we brought together current and former Treasury and SEC officials, global reformers, and industry experts to explore the ongoing shift in financial regulatory reporting from documents to data — and its profound benefits for both regulators and regulated.

    “Modernizing Financial Regulatory Reporting: New Opportunities for Data and RegTech” was an opportunity for attendees to better understand how data standards will enable new RegTech and blockchain applications for the financial industry at our first-ever New York City event. The event informed and encouraged key stakeholders to move forward with financial regulatory reporting reforms.

    The half-day gathering at Thomson Reuters headquarters in Times Square highlighted the modernization initiatives already underway and looked to the future. Here is a glimpse of what attendees learned throughout the event:

    Open data standards are becoming the norm – Thomson Reuters is leading the charge, Washington is making moves

    Open PermID Linked Data Graph. Source: https://permid.org/

    Thomson Reuters has moved away from the traditional model of charging for a standard, signaling the growth of the open data ecosystem. Rather than selling the standard itself, financial services leaders view the data and analysis on financial entities and instruments as a value-add service for clients. Thomson Reuters developed the Open PermID, which exemplifies open data developments in financial services, and is in line with the open-data movement happening in Washington.

    The PermID has created an efficient, transparent, and systematic market. As Thomson Reuters  states:

    The ID code is a machine-readable identifier that provides a unique reference for a data item. Unlike most identifiers, PermID provides comprehensive identification across a wide variety of entity types including organizations, instruments, funds, issuers, and people. PermID never changes and is unambiguous, making it ideal as a reference identifier. Thomson Reuters has been using PermID in the center of our own information model and knowledge graph for over seven years.

    Open data standards like PermID, used for financial instruments, and the Legal Entity Identifier (LEI) can provide meaningful changes in the way financial data is collected and synthesized. Both the PermID and LEI are examples of how scalable open standards can improve internal efficiency and external transparency for financial regulatory reporting. While the private sector develops viable use cases, policymakers in Washington are taking action to drive modernization in financial regulatory reporting.
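    To make the value of a persistent, unambiguous identifier concrete, here is a minimal sketch in Python. The filings and identifier values are invented placeholders standing in for a PermID or an LEI; the point is simply that when every record carries the same entity identifier, exposures collected by different regulators can be rolled up without manual reconciliation.

    ```python
    # Hypothetical filings collected by three different regulators. The entity
    # identifiers are placeholders, not real PermID or LEI values.
    filings = [
        {"regulator": "Regulator A", "local_id": "A-1029", "entity_id": "ID-EXAMPLE-0001", "exposure_usd": 1_200_000},
        {"regulator": "Regulator B", "local_id": "B-77",   "entity_id": "ID-EXAMPLE-0001", "exposure_usd": 800_000},
        {"regulator": "Regulator C", "local_id": "C-3404", "entity_id": "ID-EXAMPLE-0002", "exposure_usd": 450_000},
    ]

    # Because every record carries the same persistent identifier, regardless of
    # which regulator collected it, exposures roll up without reconciliation.
    totals = {}
    for record in filings:
        totals[record["entity_id"]] = totals.get(record["entity_id"], 0) + record["exposure_usd"]

    for entity_id, total in sorted(totals.items()):
        print(f"{entity_id}: ${total:,}")
    ```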

    State of Financial Regulatory Reporting

    In Washington, three key policy milestones have occurred over the past 18 months, demonstrating that agency officials, the Administration, and Congress are driving modernization.

    July 16, 2018: The House Financial Services Committee ultimately did not include an anti-SEC data measure in the House-passed JOBS & Investor Confidence Act (JOBS Act 3.0) — a package of thirty-two bipartisan job creation bills. The Small Company Disclosure Simplification Act (H.R. 5054) was left out of the compromise package and remains a controversial measure lacking broad support in the Committee.

    June 29, 2018: The SEC voted to adopt Inline XBRL for corporate financial data disclosure (see the final rule). The move to Inline XBRL will end duplicative documents-plus-data financial reporting and transition to data-centric reporting. This initiative is part of a broader modernization of the SEC’s entire disclosure system. The Data Coalition and its member companies (see comments from Workiva, Deloitte, Morningstar, and Grant Thornton) have long supported the adoption of Inline XBRL at the SEC, and our comment letter further explains our support of the SEC’s decision to adopt iXBRL.

    June 13, 2017: Treasury Secretary Steven Mnuchin testified before the House Appropriations Committee in defense of the Treasury Department’s Fiscal Year (FY) 2018 Budget request. Mnuchin’s testimony showed an opening to standardize data fields and formats across the nation’s overlapping financial regulatory regimes – just as the Data Coalition has already been recommending to Congress.

    Regulators need standards for accuracy, analysis, and fraud reduction

    Financial regulators rely heavily, in some cases solely, on corporate filings and data analytics to detect fraud schemes – including the practice colloquially known as ‘pump and dump’. This scheme attempts to inflate the price of a stock through recommendations based on false, misleading, or greatly exaggerated statements. Corporate financial filings are essential to accurately identifying and rooting out fraudulent activities, ultimately protecting investors.

    If financial regulators adopted data standards across all reporting systems, it would make identifying fraud far easier. That is why our Coalition is working to persuade Congress to pass the Financial Transparency Act (H.R. 1530) (summary here). The FTA would require the eight major financial regulators to adopt common data standards for the information they collect from the private sector. The bill has gained thirty-two bipartisan cosponsors. When passed, it will be the nation’s first RegTech law.

    Most notable “pump and dump” schemes include ZZZZ Best Inc., Centennial Technologies, and Satyam Computer Services.

    Why the financial industry should embrace open data standards

    During the final panel of the day, attendees heard financial industry experts describe why their colleagues should get behind open standards, and why the financial industry should welcome regulatory action.

    Currently, financial entities rely on mostly manual reporting processes when they send information to government regulators. Under this outdated system, companies typically have one group responsible for preparing reports and another responsible for issuing them. For larger companies, error is nearly inevitable in a structure so heavily reliant on layers of human oversight.

    Data standardization means lower compliance costs for the financial industry, more efficient enforcement for regulators, and better transparency for investors – but only if regulators work together to modernize.


  • August 20, 2018 9:00 AM | Data Coalition Team (Administrator)

    Guest blog by Robin Doyle, Managing Director, Office of Regulatory Affairs, J.P. Morgan Chase & Co.

    In May 2018, J.P. Morgan Chase published an article on the topic of data standardization, "Data Standardization - A Call to Action." The article called for the financial services industry, global regulators, and other stakeholders to make progress on addressing current deficiencies in financial data and reporting standards that would enhance the usability of the financial data for informational needs and risk management, as well as with innovative technologies like artificial intelligence and machine learning.

    The article called for both global and national regulators to review the state of data and reporting standardization, and to take action to make improvements within these areas. Within the United States, the need for such a review is urgent. The U.S. regulatory reporting framework is fragmented and there is a lack of coordination across agencies resulting in reporting requirements that are often duplicative and overlapping. Each agency is focused on collecting data in its own way, with its own definitions. This leads to higher cost for financial institutions to manage compliance and poorer quality/comparability of data for both regulators and firms.  

    Specifically, whenever new data or a report is requested with slight differences in definitions or granularity, it triggers a new reporting process, including reconciliations to other reports and U.S. GAAP numbers, as well as obtaining corresponding sign-offs and attestations. The lack of common data standards and reporting formats across the agencies makes reporting complex and incomparable. Separate supervisory and examination processes occur as a result of the multi-agency, multi-reporting framework. Here are two areas that highlight the issues at hand:

    1. Top Counterparty Reporting: There are multiple reports that collect information about a firm’s top counterparties including the Office of the Comptroller of the Currency (OCC) Legal Lending Limit Report, Financial Stability Board (FSB) common data collection of Institution-to-Institution Credit Exposure Data (e.g., Top 50 Counterparty Report), Federal Financial Institutions Examination Council (FFIEC) 031 (i.e., bank “Call Report”), new Fed Single Counterparty Credit Limit Top 50 Report, and others. Each of these reports has a slightly different scope and use different definitions for aggregation of exposures resulting in significant work to produce the overlapping reports and explain the differences in the reported results.1

    2. Financial Institutions Classification: There are numerous reporting requirements – regulatory capital deductions (e.g., FFIEC 101 Schedule A & FR Y-9C, Schedule HC-R), risk-weighted assets (e.g., Asset Value Correlation calculation FFIEC 101 Schedule B), systemic risk (e.g., Federal Reserve’s Banking Organization Systemic Risk Report—FR Y-15 Schedule B), and liquidity (e.g., Complex Institution Liquidity Monitoring Report FR 2052a), among others – that aggregate and report data with the classification “Financial Institution,” each using a different definition of “Financial Institution.” While on the surface this may not seem complicated, the reality is firms have full teams of people who parse data across these different definitions to ensure reporting is done correctly and can be reconciled. In a large firm, efforts to create tagging systems to automate the parsing process can take years and additional headcount to implement.2 A simplified sketch of the problem follows.
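    The sketch below uses invented counterparties and deliberately toy definitions (not the actual FR Y-15 or FFIEC instructions) to show why every divergence in the definition of “financial institution” creates reconciliation work:

    ```python
    # Toy counterparty list and two deliberately simplified report definitions.
    counterparties = [
        {"name": "Alpha Bank",        "type": "depository"},
        {"name": "Beta Insurance Co", "type": "insurer"},
        {"name": "Gamma Fund LP",     "type": "investment_fund"},
    ]

    # Report A's definition of "financial institution" includes insurers;
    # Report B's does not. (Invented for illustration.)
    REPORT_A_TYPES = {"depository", "insurer", "investment_fund"}
    REPORT_B_TYPES = {"depository", "investment_fund"}

    for cp in counterparties:
        in_a = cp["type"] in REPORT_A_TYPES
        in_b = cp["type"] in REPORT_B_TYPES
        note = "" if in_a == in_b else "  <-- classified differently; must be reconciled and explained"
        print(f"{cp['name']}: report A={in_a}, report B={in_b}{note}")
    ```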

    The U.S. regulatory community is aware of this – in the commentary from the recent Y-15 information collection rule, the Federal Reserve acknowledges the conflict but does not address the burden:

    One commenter noted that the definition of ‘financial institution’ in the FR Y-15 is different from other regulatory reports and recommended aligning the varying definitions. In response, the Board acknowledges that its regulations and reporting sometimes use differing definitions for similar concepts and that this may require firms to track differences among the definitions. Firms should review the definition of ‘financial institution’ in the instructions of the form on which they are reporting and should not look to similar definitions in other forms as dispositive for appropriate reporting on the FR Y-15.

    These issues could be addressed through the use of common data and reporting standards across the agencies. The Financial Stability Oversight Council (FSOC) could take steps within its mandate to facilitate coordination among its member agencies towards the standardization of regulatory reporting requirements across the agencies.3

    The FSOC could initiate a review of the current state of data and reporting within the U.S. to identify overlapping and duplicative reporting requirements and opportunities to move from proprietary data standards to national and global standards. Based on the review, a roadmap could be established to address the issues and gaps identified. Innovative approaches to data collection could be established, such as single collections that are then shared among agencies, and global reference data should be used in all cases where it exists. Further, mechanisms could be created to ensure better coordination among agencies in the process of rulemaking, to avoid duplication and to leverage consistent, established data standards.

    The benefits of such improvements would be substantial. Better standardization of regulatory reporting requirements across the agencies would significantly improve the ability of the U.S. public sector to understand and identify the buildup of risk across financial products, institutions, and processes.

    Reducing duplication, streamlining reporting, and using data standards would improve efficiency, saving time and reducing the costs that firms and regulators otherwise incur manually collecting, reconciling, and consolidating data. According to the U.S. Department of the Treasury’s Office of Financial Research (OFR), the estimated cost to the global industry from the lack of data uniformity and common standards runs into the billions of dollars.4

    Looking forward, having good quality, standardized data is an important stepping stone to reaping the benefits of the ongoing digitization of financial assets, digitization of markets and growing use of new, cutting-edge technologies, such as artificial intelligence. Many areas of the financial industry will be impacted, in some capacity, by these innovations in the coming years. These areas may include customer service, investment advice, contracts, compliance, anti-money laundering and fraud detection.

    We urge the U.S. regulatory community to heed this call to action.


    1. Links to referenced reports: https://occ.gov/topics/credit/commercial-credit/lending-limits.html; https://www.fsb.org/policy_area/data-gaps/page/3/; https://www.fsb.org/wp-content/uploads/r_140506.pdf; https://www.newyorkfed.org/banking/reportingforms/FFIEC_031.html; https://www.federalreserve.gov/reportforms/formsreview/FR2590_20180620_f_draft.pdf
    2. Links to referenced reports: https://www.ffiec.gov/forms101.htm; https://www.federalreserve.gov/reportforms/forms/FR_Y-1520170331_i.pdf; https://www.federalreserve.gov/reportforms/forms/FR_2052a20161231_f.pdf
    3. Dodd-Frank Wall Street Reform and Consumer Protection Act, Sec. 112(a)(2)(E)
    4. Office of Financial Research: Breaking Through Barriers Impeding Financial Data Standards (February 2017).


  • August 07, 2018 9:00 AM | Data Coalition Team (Administrator)

    Last week, the Data Coalition responded to the newly released Federal Data Strategy, which we summarized in a blog two weeks ago.

    The Federal Data Strategy is an outgrowth of the President’s Management Agenda, specifically Cross Agency Priority Goal #2 – Leveraging Data as a Strategic Asset, which is co-led by the Office of Management and Budget (OMB), the Office of Science and Technology Policy (OSTP), the Department of Commerce (DOC), and the Small Business Administration (SBA). Administration officials within these agencies called for public feedback and are currently working through the responses. We expect to see more detailed plans between October and January 2019 (see page 11 of the recent action plan).

    Our response provided high-level commentary on the draft principles as well as six proposed use cases that the Administration could potentially work into the prospective Data Incubator Project.


    Commentary on Draft Principles:

    The Federal Data Strategy proposes 10 principles spread across three categories: Stewardship, Quality, and Continuous Improvement.

    Overall, we emphasized the benefits of assuring federal data assets are in open and machine-readable formats that impose uniform and semantic structure on data, thus mitigating organizational uncertainties and expediting user development. We also discussed the importance of pursuing data standardization projects that identify common data elements across organizations and enforce standards.

    For Stewardship, it is important to ensure that data owners and end users are connected in ways that assure data is presented in useful ways and that data quality can be continuously improved.

    With regards to Quality, it is important to establish policies to assure core ‘operational’ and ‘programmatic’ data assets are accurate, consistent, and controlled. We note simply that any data standards or quality thresholds should be enforced at the point of ingestion. As a starting place for data strategy principles, we recommend incorporating the open data principles identified by the CIO Council’s Project Open Data.
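    As an illustration of what enforcement at the point of ingestion might look like, here is a minimal sketch with hypothetical field names and rules; it is not drawn from any actual agency system.

    ```python
    # Hypothetical ingestion gate: records that fail the agreed-upon checks are
    # rejected before they enter the data store, rather than cleaned up downstream.
    REQUIRED_FIELDS = {"award_id", "recipient_name", "amount"}

    def validate(record):
        """Return a list of problems; an empty list means the record passes."""
        problems = [f"missing field: {field}" for field in REQUIRED_FIELDS - record.keys()]
        if "amount" in record and not isinstance(record["amount"], (int, float)):
            problems.append("amount must be numeric")
        return problems

    def ingest(records):
        accepted, rejected = [], []
        for record in records:
            problems = validate(record)
            (rejected if problems else accepted).append(record)
        return accepted, rejected

    accepted, rejected = ingest([
        {"award_id": "A-1", "recipient_name": "Example University", "amount": 50000},
        {"award_id": "A-2", "recipient_name": "Example Clinic", "amount": "50k"},  # rejected at ingestion
    ])
    print(f"accepted: {len(accepted)}, rejected: {len(rejected)}")
    ```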

    And finally, for Continuous Improvement, we recommend that data should be made available in open formats for bulk download by default. This allows for maximum stakeholder engagement from the beginning.


    Six Proposed Open Data Use Cases:

    We also propose the following six use cases for the Administration to work on:

    Use Case 1: Fix Public Company Filings (Access, use, and augmentation)

    The Securities and Exchange Commission (SEC) requires public companies to file financial statements in standardized XBRL format, but the standard has complications. Currently, the format allows for too much custom tagging, inhibiting the goals of comparability and transparency. The Administration should work with the SEC and the Financial Accounting Standards Board (FASB) to ensure that the U.S. Generally Accepted Accounting Principles (US GAAP) taxonomy enforces FASB rules as the true reference for all elements in the taxonomy, thus eliminating unnecessary tags, reducing overall complexity, and minimizing the creation of extension data elements. This will ultimately improve comparability and data quality.
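    A schematic example of the comparability problem: facts tagged with standard taxonomy elements can be lined up across filers, while a company-specific extension exists only in that company's filings. The extension element below is invented for illustration, and the taxonomy set is a tiny stand-in, not the real US GAAP taxonomy.

    ```python
    # A tiny stand-in for the US GAAP taxonomy and for one company's tagged facts.
    STANDARD_TAXONOMY = {"us-gaap:Revenues", "us-gaap:NetIncomeLoss", "us-gaap:Assets"}

    filing_facts = {
        "us-gaap:Revenues": 1_000_000,
        "us-gaap:NetIncomeLoss": 120_000,
        "acme:AdjustedCoreOperatingRevenues": 1_050_000,  # invented company-specific extension
    }

    standard_facts = {tag: v for tag, v in filing_facts.items() if tag in STANDARD_TAXONOMY}
    extension_facts = {tag: v for tag, v in filing_facts.items() if tag not in STANDARD_TAXONOMY}

    # Standard tags can be compared across thousands of filers; each extension tag
    # exists only in the filings of the company that created it.
    print(f"comparable facts: {len(standard_facts)}, extension facts: {len(extension_facts)}")
    ```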

    Use Case 2: Documents to Data in Management Memorandum (Decision-making and Accountability)

    Congress has already taken on the challenge of adopting a data standard for laws and mandates via the United States Legislative Markup (USLM), which provides a framework for how the Administration can transform federal documents into open data. The Administration should publish federal management guidance in integrated, machine-readable data formats instead of documents. This will allow agencies to better understand how policies integrate with each other and thus work to comply more readily, and allow the public and Congress to better understand the specific factors guiding and constraining agency programs and leadership.

    Use Case 3: Entity Identification Working Group (Enterprise Data Governance)

    Currently, the federal government uses a variety of different codes to identify companies, nonprofits, and other non-federal entities, which makes matching data sets across federal agencies a time-consuming and expensive undertaking. Adoption of the Legal Entity Identifier (LEI) as the default identification code for legal entities will enable agencies to aggregate, compare, and match data sets critical to their regulatory and programmatic missions.
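    As a minimal sketch of what that matching could look like once a common identifier is in place, the example below joins two invented agency datasets on an LEI-style key using pandas; the identifiers and figures are placeholders, not real LEIs or real records.

    ```python
    import pandas as pd

    # Two invented agency datasets that share an LEI-style key.
    contracts = pd.DataFrame({
        "lei": ["LEI-EXAMPLE-0001", "LEI-EXAMPLE-0002"],
        "contract_value_usd": [250_000, 1_400_000],
    })
    enforcement = pd.DataFrame({
        "lei": ["LEI-EXAMPLE-0002", "LEI-EXAMPLE-0003"],
        "open_enforcement_actions": [2, 1],
    })

    # With a common identifier, matching is a simple join rather than a
    # name-matching and reconciliation project.
    merged = contracts.merge(enforcement, on="lei", how="left")
    print(merged)
    ```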

    Use Case 4: Mission Support or Operational Data Standards Coordination (Decision-Making and Accountability)

    Treasury and the Office of Management and Budget (OMB) have spent over four years working to establish and integrate the DATA Act Information Model Schema (DAIMS), which links budget, accounting, procurement, and financial assistance datasets – operational data – that were previously segmented across federal agencies. The Administration should utilize the DAIMS for modernizing the annual budget process, agency financial reporting, and agency performance reporting, thus allowing for easy use of data to compare, justify, and plan budget goals and agency spending.

    Use Case 5: Mission or Programmatic Data Standards Coordination (Enterprise Data Governance; Decision-Making and Accountability; Access, Use, and Augmentation)

    To build a common approach to multi-agency programmatic data sharing, the Departments of Homeland Security and Health and Human Services created the National Information Exchange Model (NIEM), which maintains a data dictionary of common fields allowing agencies to create formats using those fields. The Administration should consider endorsing NIEM as the government-wide default for programmatic data standardization and publication projects. This will afford agencies the easier path of reusing common data fields of the NIEM Core, rather than building their own data exchanges and reconciliation processes.
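    The following sketch illustrates the NIEM idea in miniature: exchanges reuse a shared dictionary of core fields and add only program-specific fields on top. The field names are illustrative stand-ins, not actual NIEM Core element names.

    ```python
    # Illustrative stand-in for a shared "core" data dictionary.
    CORE_FIELDS = {"PersonFullName", "PersonBirthDate", "LocationPostalCode"}

    def build_exchange(core_values, program_specific):
        """Compose an exchange message from shared core fields plus program-specific fields."""
        unknown = set(core_values) - CORE_FIELDS
        if unknown:
            raise ValueError(f"not in the shared core dictionary: {unknown}")
        return {**core_values, **program_specific}

    # Two different programs reuse the same core fields instead of inventing
    # their own names and formats for them.
    benefits_message = build_exchange(
        {"PersonFullName": "Jane Doe", "PersonBirthDate": "1980-04-02"},
        {"BenefitProgramCode": "EX-01"},
    )
    screening_message = build_exchange(
        {"PersonFullName": "Jane Doe", "LocationPostalCode": "20001"},
        {"ScreeningResultCode": "CLEAR"},
    )
    print(benefits_message)
    print(screening_message)
    ```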

    Use Case 6: Establish A Standard Business Reporting Task Force to Standardize Regulatory Compliance (Enterprise Data Governance; Access, Use, and Augmentation)

    Standard Business Reporting (SBR), which has been fully implemented in Australia, demonstrates that regulatory agencies can reduce the compliance burden on the private sector by replacing duplicative forms with standardized data, governed by common data standards across multiple regimes. The Administration should convene a task force representing all major U.S. regulatory agencies to create a roadmap for standardizing the data fields and formats that they use to collect information from the private sector. While the full implementation of a U.S. SBR program would require a multi-year effort, the creation of an exploratory task force would put the policy barriers and necessary investments into scope.

    Other Organizations’ Feedback Echoes an Open Data Approach

    While the responses have not yet been made public in a central portal, we have gathered a few of the key submissions.

    The Bipartisan Policy Center (BPC) has issued two separate comment letters. The first letter, on behalf of the former leadership of the Commission on Evidence-Based Policymaking, summarizes the Commission’s recommendations. Their second letter summarizes recommendations made by the BPC-coordinated Federal Data Working Group, which the Data Coalition works with. Here we have joined calls to clarify the guidance from the 2013 open data policy (M-13-13) (e.g., define “data asset” and renew the focus on data inventories), leverage NIEM to develop data standards, look into harmonizing entity identifiers across agencies, explore preemptive implementation of the Foundations for Evidence-Based Policymaking Act (H.R. 4174), which includes the OPEN Government Data Act, and define terminology for types of public sector data (i.e., similar to our comment’s demarcation between operational and programmatic data).

    The Center for Data Innovation (CDI) think tank also provided feedback that calls for the administration to support the passage of the OPEN Government Data Act as “the single most effective step” the administration could take to achieve the goals of the Federal Data Strategy. Additionally, CDI calls for improvements to data.gov’s metadata, for OMB to establish an “Open Data Review Board” for incorporating public input in prioritizing open data projects, and for the Administration to establish “data trusts” to facilitate sharing of non-public data. Lastly, they make the point to consider how the Internet of Things (IoT) revolution and Artificial Intelligence (AI) should be included in the conversation.

    The data standards organization XBRL-US recommends that the Administration “require a single data standard for all financial data reporting…to establish a single data collection process,” adopt the Legal Entity Identifier for all entities reporting to the federal government and use automated validation rules to ensure data quality at the point of submission.

    The new State CDO Network sent a letter emphasizing the important role of State and local governments. They wrote, “[States are] in the unique position of creating and stewarding data based on federal requirements,” while calling for a formal plan to leverage administrative data to address fraud, waste, and abuse.

    The Preservation of Electronic Government Information (PEGI) Project calls for an advisory board to make recommendations on data management and stewardship while echoing our call to utilize the open government data principles and also incorporate the FAIR lifecycle data management principles. PEGI also calls for scalable and automated processes for maximizing the release of non-sensitive data on data.gov.

    Lastly, the American Medical Informatics Association (AMIA) identifies the publication and the harmonization of data dictionaries across agencies as two fundamental activities. They also call for collecting and creating information in ways that support “downstream information processing and dissemination,” for a framework to help agencies implement a “portfolio approach” to data asset management, and for the Administration to extend the concept of “data as an asset” to information produced by federal grant recipients and contractors.

    The Data Coalition will be working with these groups and others to align the Administration’s efforts to establish a pragmatic, sustainable, and effective Federal Data Strategy.


  • March 30, 2018 9:00 AM | Data Coalition Team (Administrator)

    The inaugural RegTech Data Summit’s thesis was that regulatory rules, technology, and data must be modernized in a coordinated fashion. If all three areas are modernized in tandem, new RegTech solutions will flourish, reducing reporting duplication, minimizing reporting errors, and enabling automation.

    When Regulation, Technology, and Data intersect – change happens.

    Over 400 participants, 37 speakers, three live technology demos, and over 20 exhibitors agreed: we were right.

    Throughout the day, we looked at the state of regulatory reporting regimes, solutions that exist, and what could be improved by modernization – “What is?” and “What if?”

    Here are my top 10 moments:

    1. SEC: RegTech can “make everyone’s lives easier”
    2. XBRL has an image problem
    3. A vision for a universal, non-proprietary identifier
    4. Demo: Achieving an open law vision
    5. Demo: Blockchain for continuous audits
    6. Demo: SBR is happening down under; businesses and government are both saving
    7. Silicon Valley Keynote: Private-public collaboration is key
    8. A coming convergence of regulation, technology, and data
    9. Looking ahead: What does the future look like for RegTech?
    10. The numbers

    Let’s dive into each of these moments!

    1. SEC keynote: RegTech can “make everyone’s lives easier”

    From left to right: Yolanda Scott Weston, a principal at Booz Allen Hamilton and Michael Piwowar, Commissioner, SEC

    SEC Commissioner Michael Piwowar kicked off the RegTech Data Summit. The Commissioner outlined his definition of RegTech:

    “[C]overs the use of technology by regulators to fulfil their duties in a more thorough and efficient manner…. RegTech also refers to the use of technology by regulated entities to streamline their compliance efforts and reduce legal and regulatory costs. Most importantly, the term covers collaboration between private and public actors to take advantage of existing technologies to make everyone’s lives easier.”

    • Watch SEC Commissioner Piwowar’s keynote address.

    2. XBRL has an image problem

    From left to right: Leslie Seidman, Former Chair, Financial Accounting Standards Board and Giancarlo Pellizzari, Head, Banking Supervision Data Division, European Central Bank

    • What is? Currently, the SEC has a dual reporting regime–HTML filing and XBRL filing. That’s burdensome! For over three years, the Data Coalition has been advocating for the SEC to move away from this duplicative reporting system. Leslie Seidman, former FASB Chairman, noted that “XBRL is suffering from an image crisis… most companies view this as a burden that’s deriving them no benefit whatsoever.”
    • What if? Seidman went on to recommend how advocates of structured data should describe its benefits to corporate America,

    “[S]how [corporations] the extent of manual effort compared to an automated process using XBRL data–that alone, by combining the processes… you will clearly be saving money because you would have one group who is responsible for preparing and issuing those financial reports… Talk candidly and influentially about these potential risks and costs that exist now, and how the iXBRL solution will actually reduce their risk and cost.”

    • Our Coalition strongly supports the Financial Transparency Act (FTA) (H.R. 1530) (summary here), currently pending in the U.S. House, to direct the SEC to replace its duplicative documents-plus-data system with a single submission, both human- and machine-readable. The bill has 32 bipartisan cosponsors. When passed, it will be the nation’s first RegTech law.

    3. A vision for a universal, non-proprietary identifier

    • What is? Regulatory agencies use (approximately) 18 different identifiers to track the entities they regulate; companies maintain internal IDs; and proprietary ID systems are fatally flawed. There is no ID to rule them all. The hodgepodge of identifiers impedes technological solutions, frustrates compilers and enforcers, and wastes everyone’s time.

    4. Demo: A vision for machine-readable regulation

    • What is? Member company Xcential demoed how Congress is moving away from static documents to adopt open data standards. Xcential is helping the Clerk of the House and the Office of the Law Revision Counsel create and apply an open data format to legislative materials; the project is known as the U.S. House Modernization project. Xcential’s software can natively draft and amend bills using the XML-based United States Legislative Markup (USLM). Other projects Xcential is working on include the U.K. Legislative Drafting, Amending, & Publishing Programme, which is publishing rules, regulations, orders, directives, proclamations, schemes, and by-laws (bye-laws) in open data formats, fully machine-readable!

    • What if? If laws, regulations, and other legal materials were all published in open formats, machine-readable regulation would improve compliance, reduce human effort, and shrink compliance costs. Legislation to keep an eye on: The Searchable Legislation Act (SLA) (H.R. 5143); The Statutes at Large Modernization Act (SALMA) (H.R. 1729); The Establishing Digital Interactive Transparency (EDIT) Act (H.R. 842).

    5. Demo: Blockchain for continuous audit

     

     

    • What is? Audits are burdensome, costly, and time consuming, and provide only a once-a-year or once-a-quarter picture of an organization’s finances.
    • What if? Auditchain co-founder Jason Meyers and director of assurance and XBRL architecture Eric Cohen demoed a blockchain-based approach to audits that focuses on a continuous stream of an organization’s transactions, instead of annual or quarterly reports. The Summit audience saw a fully integrated continuous audit and reporting ecosystem for traditional and decentralized enterprises. The audit data combines a standardized transaction model on the back end and XBRL on the front end for dynamic, customizable reports for stakeholders.

     

     

    6. Demo: SBR is happening down under; businesses and government are both saving

    • What is:

    • What if? Matt Vickers of Xero outlined what the benefits would be for the United States if Standard Business Reporting (SBR) were adopted: “the economy is 15 times larger, and the tax system and regulatory complexity is comparable.” The steps that need to be taken to ensure SBR is successfully implemented include: “1. Developing a single taxonomy and 2. Engaging early with software companies.”
    • The Data Coalition continues to push for legislation to replace financial regulatory documents with open data and to support a longer-term move toward SBR in the United States. Our sister organization, the Data Foundation, explained how this might work in a report last year, co-published with PwC: Standard Business Reporting: Open Data to Cut Compliance Costs.

    7. Silicon Valley Keynote: Private-public collaboration is key

    • Joe Lonsdale, co-founder of Palantir and OpenGov, delivered a compelling keynote address on how our government and Silicon Valley can partner to improve the way government collects and publishes regulatory information. Here’s a snippet:

    “It is possible for a team of entrepreneurs to very meaningfully impact government… I don’t think these things get fixed by insiders. It’s just not how the world ever works. It is always outsiders partnering with allies in the inside and figuring out how to adopt technology that’s going to upgrade all these processes.”

    • Joe announced the founding of a new startup, Esper, which will work with regulatory agencies to automate the rulemaking process. Watch his keynote address here!

    8. A coming convergence of regulation, technology, and data


    • What is? Francis Rose, host of Government Matters, moderated the Convergence Panel, which featured insights that panelists had learnt throughout the day, and brought the day’s theme together: regulation, technology, and data must be modernized in a coordinated fashion to enable RegTech solutions. Panelists agreed this is “no easy task.”
    • What if? Panelist Adam White of GMU said it best when he described what needs to happen for regulation, technology, and data to be modernized: “Agencies need to be brought together in a collaborative way … that would benefit immensely from standardized data and more transparency, allowing agencies to work on a common basis of facts across the board.”

    9. Looking ahead: What does the future look like for RegTech?

    • What is? More than 200 regulators on the local, state and federal levels have disparate systems. The regulators continue to collect document-based filings, rather than using existing RegTech solutions to collect the information as open, standardized data. And they continue to issue regulations as documents, rather than exploring machine-readable regulation.

    From left to right: Steven Balla, GMU, Jim Harper, Competitive Enterprise Institute (former), and Sarah Joy Hays, Data Coalition

    • What if? Steven Balla of GMU said THREE things need to happen to transform regulatory information collection and rulemaking from documents into data: 1. “Agency leaders need to position themselves and their organization as innovators. We can underestimate the importance of allies with agencies; 2. We need the relevant authorities in the Executive Branch to have the coordination function, specifically OMB’s Office of Information and Regulatory Affairs; 3. [and,] finally, leadership on Capitol Hill. There is nothing more effective than a law or budget to move organizational behavior.”
    • Jim Harper, formerly of the Competitive Enterprise Institute, got right to the point: “To get all agencies on the same data standards there is on one hand, shame, and then there is political imperative.”

    10. The numbers


  • March 23, 2018 9:00 AM | Data Coalition Team (Administrator)

    This week the White House endorsed a data-centric approach to modernizing and restoring trust in government. For data companies and data transparency, the newly-unveiled President’s Management Agenda (PMA) does not disappoint.

    Where did this agenda come from?

    A year ago the White House issued an Executive Order, the Comprehensive Plan for Reorganizing the Executive Branch, and a corresponding government-wide reform plan (see M-17-22). Our prior blog on the reform plan implored the Administration to make operational and material data a central focus in modernizing government management.

    With the release of the PMA, that is what the White House has done.

    The PMA’s Fourteen Goals: A Performance Agenda Grounded in Law

    As a whole, the PMA should be read in concert with the President’s Fiscal Year 2019 budget request and the corresponding agency 2018-2022 strategic plans. However, thanks to the Government Performance and Results Act (GPRA) Modernization Act of 2010 (P.L. 111-352), which established Performance.gov, you do not need to laboriously parse out the individual goals from these reams of disconnected documents.

    Instead, the PMA is broken down into fourteen discrete Cross-Agency Priority (CAP) Goals, representing the GPRA Modernization Act’s requirement for the executive branch to “identify major management challenges that are Governmentwide or crosscutting in nature and describe plans to address such challenges.”

    The unique quality of these CAP Goals is that they are “long[-]term in nature.” In GPRA, Congress designed the concept of “agency priority goals” to span Presidential transitions. In the law, “cross-agency goals” are on a four year lifecycle with a requirement that they be established a full year after a new President takes office (see Sec. 5). We saw the benefits of this structure throughout 2017, as the previous Administration’s “Open Data” CAP Goal empowered agency leaders to keep pursuing data reforms through the first year of the new Administration’s transition (see the 2014-2018 goals archived here).

    Each CAP Goal names specific leaders who will be accountable for pursuing it. This accountability helps motivate progress and insulate from politics.

    Driving the PMA: “an integrated Data Strategy”

    With the PMA, the White House is putting data and data standards at the center of federal management. This matches our Coalition’s prior recommendations, and is good news for data companies and data transparency.

    The PMA identifies three overarching “drivers” of transformation: first, a focus on the government’s systems with IT Modernization (Goal 1: Modernize IT to Increase Productivity and Security); second, an integrated strategy around Data, Accountability, and Transparency (Goal 2: Leveraging Data as a Strategic Asset); and third, improved Workforce management (Goal 3: Developing a Workforce for the 21st Century).

    These three drivers, and the three CAP goals that correspond to them, intersect with the PMA’s eleven other CAP goals (see image).

    The White House’s decision to clearly separate IT systems from data (and data standards) is the right approach. The government’s data can be standardized and made more useful, and more transparent, without requiring major system changes.

    Therefore, the Data Coalition applauds the PMA’s central focus on the government’s need for “a robust, integrated approach to using data to deliver on mission, serve customers, and steward resources”–a focus that will now guide this Administration.

    Last July we made three recommendations for the PMA along these lines. We are pleased to see all three recommendations reflected in the final product.

    First, we recommended that “OMB should adopt the DATA Act Information Model Schema (DAIMS) as the primary government-wide operational data format to align various agency business functions.” That’s exactly what Goal 2 of the PMA now does.

    The “Access, Use, and Augmentation” strategy for Goal 2 “will build on work like the DATA Act Information Model Schema (DAIMS)” (see page 16 of the PMA) and “promote interoperability, data standardization, and use of consensus standards, specifications, metadata, and consistent formats” (page 8 of the action plan). This syncs with the Treasury Department’s recently-released Strategic Plan, which states that ”[the DAIMS] can be expanded to include other administrative data and link more domains across the federal enterprise…to support decision-making and provide metrics for evaluating program performance and outcomes” (see page 30). The budget request backs this up with potential increased funding for the Treasury’s Bureau of the Fiscal Service which would have resources for “continued operational support for execution of the [DATA Act]” (see pages 21-22).

    Second, we recommended that the Administration leverage the work of the National Information Exchange Model (NIEM) for data governance work and information exchange across the government. If you read the PMA’s Goal 2 together with the 2019 budget request, you will find this recommendation validated as well.

    The “Enterprise Data Governance” strategy for Goal 2 calls for “develop[ing] a coordinated approach to managing communities of stakeholders in the Federal community and among external constituents” and better coordination of “existing governance bodies” (see page 7 of the action plan). Additionally, the 2019 budget request’s analytical perspective on “Building and Using Evidence to Improve Government Effectiveness” calls for the “development of interoperable data systems, which can communicate and exchange data with one another while maintaining the appropriate privacy and security protections” as “critical to realiz[ing] the full potential of shared administrative data.” The budget request goes on to praise NIEM as a model “data exchange at all levels of government across program areas…in partnership with private industry stakeholders and state/local partners” (see page 5 of the analytical perspective).

    Third, we supported the continued implementation of Technology Business Management (TBM), a private-sector framework that helps organizations standardize the data used to classify technology investments, and recommended alignment with the DATA Act’s DAIMS.

    In the PMA, TBM is listed alongside the DAIMS in Goal 2 (see page 16 of the PMA), and the DATA Act is named as a supporting program in Goal 10: Federal IT Spending Transparency (see page 10 of the action plan).

    The PMA’s Other Goals: Grants Accountability, Paperless Forms, and (Maybe) Guidance Documents Expressed as Data

    Across the PMA’s other CAP Goals, we see a consistent data-centric approach and continued alignment with the Data Coalition’s Policy Agenda.

    As we celebrated yesterday, Goal 8: Results-Oriented Accountability for Grants “recognizes that recipient burden (such as excessive compliance requirements) can be reduced if grant reporting data is standardized” (see page 5 of the action plan). This aligns with the objectives of the Grant Reporting Efficiency and Agreements Transparency (GREAT) Act (H.R. 4887), which we are advocating for and which is making fast progress in Congress (see more).

    Goal 4: Improving Customer Experience introduces a “Paperless Government Project,” led by the US Digital Service, which would help agencies reduce redundant and unnecessarily complex forms. The Data Coalition is pushing reforms across a number of fronts that would apply open data concepts to simplify complex regulatory reporting (for instance, Standard Business Reporting).

    And Goal 6: Shifting From Low-Value to High-Value Work seeks to establish “regular processes to assess the burden [of OMB’s management guidance] on agencies and to rescind or modify requirements over time” (see page 5 of the action plan). The way to create such processes is for OMB to publish its guidance in integrated, machine-readable data formats instead of documents. Our work to pursue “open data for laws and mandates” provides a use case for exactly the same transformation, starting with Congressional laws, bills, and amendments.

    Each of the CAP Goals identifies the senior executives who will be accountable for delivering these promised reforms. We commend the Administration for explicitly recognizing both the executives accountable for these goals and the career staff who will be managing these efforts over the next four years.

    The Road Ahead

    As all this work takes shape, it will be important to remember the guiding statements which set the stage at the PMA’s launch event. Newly-appointed Office of Management and Budget Deputy Director for Management Margaret Weichert called data “a foundational asset to driving economic growth in innovation.” Incoming US Chief Information Officer Suzette Kent echoed with a call for a “strategic view of data as one of our mission critical assets.” It will be up to these new leaders to turn the PMA’s vision into a reality.

    The Data Coalition will continue to support data transparency and data standardization–which means we will work hard to hold the Administration accountable to these well-stated goals.


  • March 22, 2018 9:00 AM | Data Coalition Team (Administrator)

    The White House has published a plan to transform federal grant reporting from disconnected documents into open, standardized data.

    The Data Coalition views this as a big step forward! Supported by StreamLink Software and other leading data companies, we’ve been pushing for open data in grant reporting since 2013.

    Last Tuesday, as part of the long-awaited release of the President's Management Agenda, the White House announced fourteen new government-wide goals. Goal 8 is “Results-Oriented Accountability for Grants.”

    The White House recognizes that the government’s current system of grant reporting creates challenges for grantor agencies, grantees, and the populations they serve. Grantees must fill out complicated document-based forms to report on their receipt and use of grant funds.

    As a result, their managers report spending 40% of their time on compliance, according to a survey by REI Systems, the National Grants Management Association, and the George Washington University.

    Meanwhile, because these forms are submitted to over 2,200 separate program offices across the government, transparency is difficult. There is no easy way for agencies, beneficiaries, or the public to see a grantee’s performance across multiple programs.

    CURRENT STATE: According to the Data Foundation’s Transforming Grant Reporting, without a standardized data taxonomy, federal grant reporting is a mess!

    Last year, our sister organization, the Data Foundation, conducted an intensive program of research into the challenges of federal grant reporting, supported by StreamLink Software and Workiva. In December, the Foundation published its magnum opus: Transforming Grant Reporting, which recommended that the government should “replace document-based [grant reporting] forms with standardized, open data.”

    To accomplish that, we need a government-wide taxonomy, or data dictionary, which standardizes the data fields that all grantor agencies use to collect information from their grantees.
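
    As a rough illustration of what such a data dictionary enables, here is a small Python sketch. Every field name, definition, and report value below is hypothetical and is not drawn from the actual Core Data Elements; the point is only that a shared taxonomy lets one validation routine serve every grantor agency.

    ```python
    # Hypothetical sketch of a shared grant-reporting taxonomy and a generic validator.
    from dataclasses import dataclass

    @dataclass
    class DataElement:
        name: str          # standardized field name shared by all grantor agencies
        definition: str    # plain-language definition every agency agrees on
        data_type: type    # expected type, so submissions can be machine-validated

    # Toy taxonomy with three illustrative (made-up) elements.
    TAXONOMY = [
        DataElement("award_id", "Unique identifier assigned to the grant award", str),
        DataElement("recipient_name", "Legal name of the grantee organization", str),
        DataElement("federal_share_spent", "Federal funds expended to date, in dollars", float),
    ]

    def validate_report(report: dict) -> list:
        """Return a list of problems found in a grantee's data submission."""
        problems = []
        for element in TAXONOMY:
            if element.name not in report:
                problems.append(f"missing field: {element.name}")
            elif not isinstance(report[element.name], element.data_type):
                problems.append(f"wrong type for {element.name}")
        return problems

    # The same check works for any agency that adopts the taxonomy.
    print(validate_report({"award_id": "ED-2018-0001", "recipient_name": "Example University"}))
    # -> ['missing field: federal_share_spent']
    ```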

    FUTURE: If the federal government adopts a common data taxonomy for all grant reporting, grantees will enjoy a reduced compliance burden, and agencies and the public will get better transparency.

    Last month, Congress took note. Reps. Virginia Foxx (R-NC) and Jimmy Gomez (D-CA) introduced the GREAT Act (H.R. 4887), which will require the government to create the necessary taxonomy, and then require all the agencies to use electronic data, formatted consistently with that taxonomy, to collect information from their grantees. The House Oversight Committee unanimously passed the GREAT Act on February 6th, sending it to the full House of Representatives.

    Now, thanks to this week’s announcement, it’s clear that the White House is keen to take on the challenge of standardizing grant data, even in advance of a mandate from Congress.

    Here’s what the White House intends to do.

    First, working with the Department of Education and the Department of Health and Human Services, the White House will standardize the “Core Data Elements” that are used in reports that grantees submit to “a significant number of agencies.” This should be complete by the end of Fiscal Year 2018, or September 30, 2018. Details are on page 5 of the White House’s grants management Action Plan.

    Second, the White House will figure out how to govern and maintain the new taxonomy. The White House intends to complete this step by the same deadline: September 30, 2018.

    Third comes the hard part. The White House will “[D]evelop and execute [a] long-term plan for implementing data standards government-wide.” That means forcing all the grantor agencies to collect reports from their grantees in electronic data, formatted consistently with the taxonomy. The Action Plan announces no deadline for this crucial third step.

    Alongside these steps, the White House intends to create a common solution for Single Audit Reporting and build a tool to help agencies manage grant risk (page 6 of the Action Plan).

    Finally, once grant reports have been transformed into standardized data, and once new tools have been built to utilize that data, the White House will lead all grantor agencies to manage their grant programs based on risk (page 7 of the Action Plan).

    We are excited that the White House has put itself on a pathway to transforming all federal grant reporting.

    We won’t let our leaders off the hook, of course; we’ll still work to convince Congress to pass the GREAT Act right away, so that the transformation won’t just be a White House plan but a legal mandate.

    We know the road will be long. If the federal grant system were one company, it would be, by far, the world’s largest, with over $600 billion in annual revenue.

    But for the same reason, automating grantees’ compliance and bringing system-wide transparency to agencies and the public is too good an opportunity to miss.

    **Note: Read our full summary of the President’s Management Agenda here.**


  • February 22, 2018 9:00 AM | Data Coalition Team (Administrator)

    Regulatory technology solutions, or “RegTech,” will enable automated regulatory reporting, derive insights from regulatory information, and share information on complex markets and products. Progress in RegTech has been seen in the private sector as access to quality data improves. That progress has not yet been mirrored in the public sector here in the United States, but the potential to improve government efficiency is not far off. Our RegTech Data Summit will be a unique opportunity to dive into the policy area and hear how RegTech solutions, empowered by the Legal Entity Identifier (LEI) and Standard Business Reporting (SBR), are transforming reporting and compliance relationships.

    RegTech solutions have been defined by The Institute of International Finance as “the use of new technologies to solve regulatory and compliance requirements more effectively and efficiently.” PwC defines it as “the innovative technologies that are addressing regulatory challenges in the financial services world,” and notes that the financial environment is “ripe for disruption by emerging RegTechs” due to the growth of automation and rising compliance costs. As such, RegTech relies on the quality of its inputs – data.

    Regulatory information collected from regulated industries continues to be inconsistent and of poor quality. For example, the Securities and Exchange Commission has failed to police the quality and consistency of the corporate financial data it collects – which has made it much more difficult for RegTech companies to use that data to deliver insights to investors. The lack of consistent, quality data impedes the development of RegTech solutions in the United States.

    Other developed countries are showing that once their regulators collect standardized data, their RegTech industries can deliver new value. For instance, Australian software vendors used SBR’s standardized data structure to build new compliance solutions. Using these solutions, Australian companies can now comply with at least five different regulatory reporting regimes within one software environment. In the 2014-15 fiscal year, SBR was saving Australian companies over $1 billion per year through automation.

    Our inaugural RegTech Data Summit on Wednesday, March 7, will explore how data standards, like SBR, can be adopted across our regulatory reporting regimes so that RegTech can evolve into a thriving, sustainable industry. The Summit will also examine policy initiatives such as the Financial Transparency Act (H.R. 1530), currently pending in the House of Representatives, which directs the eight financial regulators to publish the information they collect from financial entities as open data: electronically searchable, downloadable in bulk, and free of license restrictions.

    The Summit will be a unique opportunity to connect with agency leaders, Congressional allies, regulated industries, and tech companies who are defining this emerging policy area.

    Attendees will have the opportunity to hear from leaders in the government and private sector. Headline speakers include: SEC Commissioner Michael Piwowar; Joe Lonsdale, Founder of Palantir and OpenGov and Partner at 8VC; Giancarlo Pellizzari, Head of Banking Supervision Data Division, European Central Bank; and Stephan Wolf, CEO, Global LEI Foundation.

    Summit-goers will see first-hand how RegTech solutions can modernize compliance across all regulatory regimes. Three simultaneous demos will take place: a Regulation demo, a Technology demo, and a Data demo.

    On the Regulation demo stage you will see the legal drafting tool LegisPro, which automates the drafting and amending of regulations and legal materials so they can be displayed and redlined (much like track changes in Microsoft Word). Once regulations are natively drafted as data, instead of just as documents, regulated industries will be able to digest them, and comply with them, much faster.

    In the Technology demo, Auditchain will show its continuous audit and real-time reporting ecosystem for enterprise and token statistics disclosure. The continuous audit technology provides the highest levels of audit assurance through decentralized, consensus-based audit procedures under the DCARPE™ Protocol. This blockchain technology is revolutionizing how enterprises report to stakeholders and regulators.

    Last, but not least, the Data demo session will explore how non-proprietary identifiers, such as the LEI, will make regulatory information accessible to the public, open, and available for anyone to bulk download. Dozens of agencies around the world, including the U.S. Commodity Futures Trading Commission and the Consumer Financial Protection Bureau, now use the LEI to identify regulated companies.
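
    To illustrate why a shared, non-proprietary identifier matters, here is a small Python sketch. The two “agency” record sets and the LEI value are fabricated for illustration; in practice the records would come from the regulators’ own filings and GLEIF’s public LEI files.

    ```python
    # Fabricated example: two regulators' records keyed to the same (made-up) LEI.
    cftc_records = [
        {"lei": "529900EXAMPLE0000012", "swap_activity": "high"},
    ]
    cfpb_records = [
        {"lei": "529900EXAMPLE0000012", "consumer_complaints": 42},
    ]

    # Because both datasets use the same 20-character LEI, joining two views of one
    # company is a simple dictionary merge rather than error-prone name matching.
    by_lei = {r["lei"]: dict(r) for r in cftc_records}
    for record in cfpb_records:
        by_lei.setdefault(record["lei"], {}).update(record)

    print(by_lei["529900EXAMPLE0000012"])
    # -> {'lei': '529900EXAMPLE0000012', 'swap_activity': 'high', 'consumer_complaints': 42}
    ```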

    All in all, Summit attendees will hear from over 25 RegTech experts who are moving the chains in this new tech space, both domestically and internationally.

    The Summit will explore four key themes:

    1. Why RegTech solutions require data standardization.
    2. How regulatory agencies can maximize the promise of such solutions by coordinating changes in Regulations, Technology, and Data.
    3. Why old-fashioned, document-based disclosures impede higher-quality data.
    4. How, in the long term, regulatory agencies can create an entirely new paradigm for RegTech by embracing existing regimes like Standard Business Reporting (SBR).

    If you’re a data enthusiast or eager to learn how RegTech solutions can solve the challenges facing our financial regulators or regulated entities, join us March 7 at the Capital Hilton to learn how policy initiatives like the Financial Transparency Act and SBR will deliver changes, create a new RegTech industry, and generate new opportunities to apply emerging technologies like blockchain.

    This article contains excerpts from Donnelley Financial Solutions’ upcoming white paper, How Data will Determine the Future of RegTech.



  • October 16, 2017 9:00 AM | Data Coalition Team (Administrator)

    Last week the U.S. Securities and Exchange Commission proposed its first expansion of open corporate data in nearly nine years.

    Here’s where the new proposal came from, what it means, and why it matters.

    For Financial Information: A Phantom Menace

    As longtime Data Coalition supporters know, the SEC was an early adopter of open data. In February 2009, the agency finalized a rule that requires all U.S. public companies to report their financial statements using the eXtensible Business Reporting Language (XBRL) format.

    In theory, as the CFA Institute explained last month, XBRL should have allowed companies to automate some of their manual compliance processes. And XBRL should have made corporate financial information easier for investors, markets, and the SEC itself to absorb.
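
    To see why tagged filings matter for automation, consider this deliberately simplified Python sketch. The XML fragment only imitates the general shape of a tagged filing (real XBRL instance documents use formal taxonomies, contexts, and units), and the concept names and figures are hypothetical.

    ```python
    # Simplified illustration of machine-readable financial facts.
    # This is NOT real XBRL; it only mimics the idea of standardized concept tags.
    import xml.etree.ElementTree as ET

    FILING = """
    <filing>
      <fact concept="Revenues" unit="USD" period="FY2016">1000000000</fact>
      <fact concept="NetIncomeLoss" unit="USD" period="FY2016">150000000</fact>
    </filing>
    """

    root = ET.fromstring(FILING)

    # Because each figure carries a standardized concept name, software can pull it
    # out directly -- no human has to read the document or scrape free-form text.
    facts = {fact.get("concept"): int(fact.text) for fact in root.findall("fact")}
    print(facts["Revenues"])  # 1000000000
    ```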

    But it hasn’t gone well.

    As a new report from our sister organization, the Data Foundation, explains, the SEC chose an overly-complex structure for public companies’ financial statements. Worse, the complex structure doesn’t match the accounting rules that public companies must follow (see page 22). As a result, it is unnecessarily hard for companies to create these open-data financial statements, and for investors and tech companies serving them to absorb such data.

    These difficulties have led some in Congress to conclude that the SEC shouldn’t require most companies to file financial statements as open data at all. This is the wrong reaction. The problem isn’t open data; the problem is the way the SEC has implemented it.

    Fortunately, the SEC is taking steps to fix its financial statement data. Last year, the agency signaled it may finally upgrade from the current duplicative system, in which companies report financial information twice (once as a document, then again as XBRL), and replace it with a single submission that is both human- and machine-readable. And the recommendations of the Data Quality Committee (see page 37) show how the complex structure could be demystified.

    And then there’s last week’s announcement.

    For Non-Financial Information: A New Hope

    Financial statements are an important part, but only a part, of the information public companies must file with the SEC. Companies also report their basic corporate characteristics, stock structures, executive compensation, subsidiaries, and much more. Partly because its XBRL mandate for financial statements has been a failure, the agency never got around to transforming any of this non-financial information into open data.

    For the last decade, companies have continued to report their non-financial information – even information that could easily be structured, like tables and check marks – as plain, unstructured text, in electronic documents that "mimic yesterday's paper ones." 

    Here is the cover page of Google’s annual SEC report. It should be expressed as open data. It isn’t. But change is coming.

    Last year, the Data Coalition filed a comment letter recommending that the SEC should replace all these documents with data. We recommended (on pages 2 and 12) that the SEC should begin with the “cover pages” of the main corporate disclosure forms. Cover pages include every company’s name, address, size, public float, and regulatory categories.

    The information reported on cover pages is very basic and fundamental for U.S. capital markets – and yet it is not available as structured data, creating surprising inefficiencies.

    To create a list of all well-known seasoned issuers, for instance, the Commission and investors must manually read every filing, employ imperfect text-scraping software, or outsource those tasks to a vendor by purchasing a commercially-available database. (Page 2.)

    Last November, the SEC signaled a change. In a report on modernizing non-financial disclosure, the agency recognized that adopting standardized data tags for cover page information “could enhance the ability of investors to identify, count, sort, and analyze registrants and disclosures” (page 22). And the report cited the Data Coalition’s comment letter as its inspiration for this (footnote 90).

    And last week, the SEC made it official, in a formal rule proposal on Wednesday, October 11. The agency proposed to begin requiring public companies to submit all cover page information as standardized, open data (page 105).

    Over the next two months, the SEC will collect public comments on this change from the financial industry, the tech industry, transparency supporters, and other interested parties. Then, it will decide whether to finalize the proposed rule.

    For the Future of Corporate Disclosure Information: The Force Awakens

    By transforming the basic information on the cover pages into open data, the SEC can unleash powerful analytics to make life easier for investors, markets, and itself.

    Software will be able to identify companies of particular sizes, or regulatory characteristics, automatically, from the SEC’s public, freely-available data catalogs. There will no longer be any need to "manually read every filing, employ imperfect text-scraping software, or purchas[e] a commercially-available database."
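
    As a rough sketch of what that looks like in practice, the Python snippet below filters a fabricated cover-page dataset. The company names and figures are made up, and the $700 million public-float screen is only a simplified stand-in for the full well-known seasoned issuer test.

    ```python
    # Fabricated cover-page records, standing in for a structured SEC data catalog.
    cover_pages = [
        {"company": "Example Corp", "public_float_usd": 1_200_000_000, "filer_category": "large accelerated filer"},
        {"company": "Sample Inc", "public_float_usd": 250_000_000, "filer_category": "accelerated filer"},
    ]

    # With cover pages as data, "list every issuer above the float threshold" becomes
    # one line of code -- no manual reading, text scraping, or commercial database needed.
    FLOAT_THRESHOLD_USD = 700_000_000  # simplified proxy for the WKSI public-float test
    wksi_candidates = [c["company"] for c in cover_pages if c["public_float_usd"] >= FLOAT_THRESHOLD_USD]
    print(wksi_candidates)  # ['Example Corp']
    ```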

    Of course, the cover pages are only the beginning.

    Beyond the financial statements, whose slow transformation began in 2009, and the cover page information, whose transformation is starting right now, the SEC’s corporate disclosure system is full of important information that should be expressed as open data, but isn’t. We recommended a strategy for tackling the entire disclosure system in 2015.

    Where to begin? The recommendations of the SEC’s Investor Advisory Committee are a good place to start.

    The more information is turned from documents into data, the more analytics companies, like our Coalition members idaciti, Morningstar, and Intrinio, can turn that data into actionable insights for investors (and for the SEC itself).

    The SEC’s open data transformation has been grindingly slow. But last week’s announcement shows it isn’t dead – and in fact is full of promise. We’ll keep pushing.


