NEW YORK (Thomson Reuters Regulatory Intelligence) - In the first case involving a rule aimed at ensuring the stability of financial market operations, three NYSE exchanges have agreed to pay a $14 million fine to settle Securities and Exchange Commission allegations that, due to a variety of lapses from 2008 through 2016, they failed to comply with laws and regulations governing registered securities exchanges.
The three exchanges were the New York Stock Exchange LLC (NYSE), NYSE American LLC (American) and NYSE Arca, Inc. (Arca).
The SEC's action on March 6 marks the first enforcement case based on Reg SCI. The 2014 rule requires SEC-overseen entities and self-regulatory organizations such as stock exchanges to ensure that their systems are sufficiently robust and secure to maintain their operational capability and fair and orderly markets.
The case provides an opportunity to consider best practices and compliance program enhancements that businesses should examine within their own regimes and the testing that can best determine if they are effective and fit for purpose.
The SEC charged that from 2008 to 2015, the interaction between two order types on the NYSE and American exchanges — pegging orders and non-displayed reserve orders — potentially enabled floor brokers to determine the presence of non-displayed pending depth liquidity on the exchanges’ order books, without the exchanges disclosing this possibility in their rules.
The violations also included erroneously implementing a market-wide regulatory halt and negligently misrepresenting stock prices as “automated” despite extensive system problems ahead of a total shutdown of two of the exchanges, the SEC said.
“For retail investors to have confidence in our markets, exchanges must provide accurate information and comply with legal requirements, including being equipped for unexpected market disruptions,” Stephanie Avakian, co-director of the SEC’s enforcement division, said in a statement.
The commission also charged that, for approximately one year after November 3, 2015, NYSE and American lacked "reasonably designed" backup and recovery capabilities as required under Regulation SCI, and that Arca failed to comply with national market system plans of which it is a sponsor or participant under Regulation NMS.
The Regulation NMS violation involving Arca occurred on August 24, 2015, when, the SEC said, Arca applied price collars during unusual market volatility without a rule in effect to permit them.
The commission noted that the collars slowed the resolution of order imbalances, and that the lapse stemmed from Arca having in-house rules describing price collars for opening and closing auctions but not specifying any for reopening auctions.
Thus, according to the SEC, the exchange failed to comply with its own rules regarding re-opening auctions.
NYSE spokeswoman Kristen Kaus said in response to the action: “We take our regulatory obligations seriously and remain focused on building and maintaining industry-leading technology and ensuring that our markets operate with the utmost integrity.”
Previous SEC enforcement actions against exchanges include a $14 million settlement in 2015 with BATS Global Markets, which is now owned by Cboe Global Markets, over charges that two exchanges acquired by BATS had given advantages to certain high-frequency trading firms.
In May 2013, the SEC penalized Nasdaq Inc. $10 million to settle charges stemming from mistakes made during Facebook's 2012 initial public offering.
The SEC adopted Regulation SCI in November 2014 to strengthen the technology infrastructure of the U.S. securities markets by reducing the occurrence of systems issues, improving resiliency when problems occur, and enhancing the commission's oversight of the securities market's technological infrastructure.
It applies, among other entities, to SROs such as stock and options exchanges, registered clearing agencies, the Financial Industry Regulatory Authority (FINRA) and the Municipal Securities Rulemaking Board (MSRB).
It requires firms to establish processes to ensure that their systems have the capacity, integrity, resiliency, availability and security adequate to maintain their operational capability and maintain a fair and orderly market.
The rule also requires SCI entities to take corrective action with respect to SCI events — such as systems disruptions, system compliance issues and system intrusions — plus notify the SEC and affected members or participants about the events.
Rule 1002(c)(1)-(2) of Reg SCI defines a major SCI event as one that has had, or would have: (1) any impact on a critical SCI system, or (2) a significant impact on the SCI entity’s operations or on market participants.
Finally, such entities must review at least annually their Reg SCI compliance protocols, submit quarterly reports regarding changes to their SCI systems and maintain accurate books and records.
Such testing must include scheduled testing of the entity’s business continuity and disaster recovery plans, including back-up systems, and must be coordinated on an industry- or sector-wide basis with other SCI entities.
Rule 1004 of Reg SCI does not require two separate tests of business continuity and disaster recovery (BC/DR) plans — one that includes the participation of members or participants and one that does not. Rather, the rule says that SCI entities can conduct functional and performance testing of their BC/DR plans with the participation of designated members or participants and in coordination with other SCI entities on an industry- or sector-wide basis, provided such testing occurs at least once every 12 months.
In addition to the best practices contained in the rules implementing Reg SCI, some practices firms should consider to demonstrate clearly their adherence to the rule include:
— EVENT NOTIFICATION. The crux of the rule is each SCI entity's responsibility for ensuring its compliance with Regulation SCI, including both the timely reporting of any required SCI event notification and the accuracy and completeness of such notifications.
To provide such immediate notifications, entities should consider designating personnel for the specific purpose of providing notifications once they are confirmed. In addition, firms should make sure their agreements with vendors and other business partners — including their own affiliated branches and subsidiaries — have spelled out in precise detail how they are to be notified of such occurrences.
All BC/DR documentation should be accessible as widely and stored as safely as possible, possibly in the cloud and in a variety of storage locations.
— TRAINING. Train as many appropriate people from a variety of departments on the BC/DR plan so that it does not depend on just one or two individuals, and to build as much buy-in and accountability as possible for its implementation, updating and testing.
Well-crafted BC/DR documentation is not helpful unless people know how to execute it.
And consider training several personnel outside of the firm’s primary location on the critical aspects of BC/DR compliance, just in case that primary site is not operational after an incident strikes and the team outside the region needs to step into the main role.
— TESTING THE PLAN. Testing is also emphasized in the regulation, and it should be done with critical eyes and independent judgment, which means hiring an outside firm to conduct it. BC/DR plan testing must cover multiple scenarios — not just a hacker attack or a power outage.
Test the plan realistically, assuming a wide range of possible disruptions and disasters, and a range of harm that could result — from moderate to critical.
Testing helps to verify to the regulator that recovery procedures are functional, and it familiarizes personnel with those procedures so that, when a crisis strikes, they can follow a road map they have already seen, tested and rehearsed.
Plans and tests can become outdated or otherwise irrelevant, even between routine tests. When business operations, services and processes change, the plan must change to stay aligned with this evolving environment, and proof of these efforts should be collected to show the regulator.
Since most essential applications will likely be built to fail over reliably to backup servers during a BC/DR event, the testing of all back-up systems (and, if applicable, their back-ups) must be done regularly, with particular emphasis on those back-ups’ ability to come online immediately.
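The failover logic described above can be illustrated in a few lines. The following is a minimal, hypothetical Python sketch — the endpoint names and health probe are assumptions for illustration, not any exchange's actual systems — showing the kind of "first healthy endpoint wins" selection that a BC/DR drill would exercise:

```python
# Illustrative sketch only: select the first healthy endpoint,
# checking the primary system before its ordered backups.
# Endpoint names and the probe are hypothetical.
from typing import Callable, Optional, Sequence


def select_active(endpoints: Sequence[str],
                  is_healthy: Callable[[str], bool]) -> Optional[str]:
    """Return the first healthy endpoint, or None on total outage."""
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    return None  # total outage: escalate per the BC/DR plan


# Simulated probe results for a drill: primary down, first backup up.
status = {"primary": False, "backup-1": True, "backup-2": True}
active = select_active(["primary", "backup-1", "backup-2"], status.get)
print(active)  # backup-1
```

A real drill would replace the simulated `status` table with live health probes and verify, on a schedule, that each backup actually accepts traffic — not merely that it exists.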
(Julie DiMauro is a regulatory intelligence expert in the Enterprise Risk Management division of Thomson Reuters Regulatory Intelligence. Follow Julie on Twitter @Julie_DiMauro. Email Julie at firstname.lastname@example.org.)
This article was produced by Thomson Reuters Regulatory Intelligence and initially posted on Mar. 22. Regulatory Intelligence provides a single source for regulatory news, analysis, rules and developments, with global coverage of more than 400 regulators and exchanges. Follow Regulatory Intelligence compliance news on Twitter: @thomsonreuters