
The Build vs. Buy Dilemma

A real example of the build vs. buy decision: buy your EDM solution, or build it in-house

I was talking to the CIO of a high-yield Hedge Fund recently who found himself in the classic build vs. buy dilemma. He described a need in his org for classic security master features, along with reporting and position-aggregation requirements that were “data warehouseish” in nature. I consider him a tech guru with a strong track record of delivering on time, and he asked if he could just build it himself.

I was able to tell him with confidence that even with the right balance of domain IP and development resources, it would take 2-3 years to get a first stable cut out the door. Even longer if it’s meant to be truly multi-domain (i.e. products, positions, customers and counterparties, with the hardest part being modeling the links between them). You simply can’t throw bodies at this.

He asked how I arrived at this number. Well, we’ve been doing this for 30 years. Here’s the thought process I described:

To build it, you can break the work up into five broad areas:

  1. The data model
    • Constant regulatory scrutiny translates to ever-increasing transparency and lineage requirements
    • Relationships between instruments, counterparties, clients and products have to be factored in
    • Multi-domain schemas become exponentially complex to design, e.g. the counterparty of a trade could be a client in an adjacent business, and an issuer of debt held elsewhere (see the first sketch after this list). This holistic view is critical for the risk management operation (I know Wall Street has a short memory, but c’mon, 2008 was not that long ago!!)
  2. Publishing out to downstream systems
    • This tends to be the most proprietary and painful part of a “self-build”, and it can pose a real risk to business continuity. Call it the last mile!
    • Most mature EDMs will give you a running start to common OMSs, EMSs & risk platforms. Mature orgs tend to have enterprise message buses in place already, which makes this step a bit easier
    • Practically though, we see message/event-based publishing along with old-school file transfer protocol support. Web services, JSON feeds etc. will also come into play here (see the publishing sketch after this list)
  3. Interfaces to external sources
    • TR, ICE, SX, S&P, BB, WM Daten etc. (aka connectors), and don’t forget the custodians and fund admins, catering to the various proprietary data structures and delivery mechanisms. There’s an increasing trend toward multi-sourcing, so without pre-built adapters this can be grinding
    • This is not a one-time mapping BTW; there is the thankless task of maintaining the interface as the data vendors add, remove & update fields, change APIs etc. (see the mapping sketch after this list)
    • Think of the (IDC) SIRS to APEX or (S&P) Crosswalk to Xpressfeed BECRS transition(s). Or the fields in BBG per-security, DataLicense BackOffice and BVAL moving to other modules
    • Some data vendors roll out large updates once or twice a year, while others do this several times a week. Each of those “notices” needs a thorough review and functional assessment
  4. Loading data, researching and resolving data exceptions
    • Data validation, composite record creation, data arbitrage (aka gold copy creation) and workflows around exception management (see the gold copy sketch after this list)
    • The more rules, the more complex the UI and software (think four-eye, six-eye, parallel, forked and merged workflows etc.)
    • A good EDM vendor should be able to provide guidance on data quality metrics and best practices (during implementation), having experienced this across their client base. When building your own exception management system, it’s an unfortunate process of self-discovery; at best you will replicate your old processes
  5. GUI and Visualization
    • All too often overlooked: before passing any system over to the business, is it intuitive and road-tested? So many self-builds fail due to poor adoption and sidebar workarounds (exposing the divide between UI and UX)
    • Finally, it’s impossible to demonstrate provenance and data quality if you can’t see and report on how it has improved over time
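
To make that modeling point concrete, here’s a minimal Python sketch of the multi-domain idea in area 1: one legal entity carrying the counterparty, client and issuer roles at once. All names and identifiers are invented for illustration; this shows the concept, not GoldenSource’s actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class LegalEntity:
        lei: str                      # Legal Entity Identifier (placeholder below)
        name: str
        roles: set = field(default_factory=set)

    @dataclass
    class Instrument:
        isin: str
        issuer: LegalEntity           # link back to the entity master

    @dataclass
    class Trade:
        trade_id: str
        instrument: Instrument
        counterparty: LegalEntity     # may be the same entity that issues debt elsewhere

    # One master record, three roles: not three disconnected records in three silos.
    acme = LegalEntity(lei="XXXXXXXXXXXXXXXXXXXX", name="Acme Capital",
                       roles={"counterparty", "client", "issuer"})
    bond = Instrument(isin="XS0000000000", issuer=acme)
    trade = Trade(trade_id="T-1001", instrument=bond, counterparty=acme)

    # The holistic risk view falls out of the links: from one entity you can
    # reach every trade and every issue it touches.
    assert trade.counterparty is bond.issuer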
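For area 2, here’s a sketch of the two publishing styles we typically see side by side: message/event-based distribution next to an old-school file drop. The bus object and the file layout are stand-ins for whatever infrastructure a firm already runs, not a specific product’s API.

    import csv, json
    from pathlib import Path

    def publish_event(record: dict, topic: str, bus) -> None:
        # "bus" stands in for whatever enterprise message bus is already in
        # place (Kafka, MQ, ...); the point is event-based distribution.
        bus.send(topic, json.dumps(record).encode("utf-8"))

    def publish_file(records: list, out_dir: Path) -> Path:
        # Old-school file drop for downstream systems that still poll a directory.
        path = out_dir / "security_master.csv"
        with path.open("w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(records[0]))
            writer.writeheader()
            writer.writerows(records)
        return path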
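For area 3, one way to keep interface maintenance survivable is to hold each vendor’s field mapping as versioned data rather than hard-coded parsing logic, so a vendor “notice” becomes a config change plus a review. Vendor names, versions and field names below are made up.

    # Each (vendor, feed version) pair gets its own mapping to internal names.
    FIELD_MAPS = {
        ("vendorA", "2024.1"): {"CPN_RATE": "coupon", "MAT_DT": "maturity_date"},
        ("vendorA", "2024.2"): {"CouponRate": "coupon", "MaturityDate": "maturity_date"},
    }

    def normalize(raw: dict, vendor: str, version: str) -> dict:
        mapping = FIELD_MAPS[(vendor, version)]
        unmapped = set(raw) - set(mapping)        # fields the vendor added or renamed
        if unmapped:
            print(f"functional review needed, unmapped fields: {sorted(unmapped)}")
        return {mapping[k]: v for k, v in raw.items() if k in mapping}

    # A feed upgrade then becomes: add the new map, re-run, review the diff.
    print(normalize({"CouponRate": 5.25, "MaturityDate": "2030-06-15"},
                    "vendorA", "2024.2"))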
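And for area 4, a toy version of gold copy creation: each attribute is taken from the most trusted source that supplies it, and disagreements are queued as exceptions for review. The precedence order and the exact-match comparison are simplifying assumptions; real arbitrage rules run per attribute with tolerances.

    # Highest-trust source first (invented example order).
    SOURCE_PRECEDENCE = ["vendorA", "vendorB", "custodian"]

    def gold_copy(candidates: dict) -> tuple:
        gold, exceptions = {}, []
        attributes = {a for rec in candidates.values() for a in rec}
        for attr in sorted(attributes):
            values = {s: rec[attr] for s, rec in candidates.items() if attr in rec}
            if len(set(values.values())) > 1:        # sources disagree
                exceptions.append(f"{attr}: {values}")  # route to four-eye review
            for source in SOURCE_PRECEDENCE:         # highest trust wins
                if source in values:
                    gold[attr] = values[source]
                    break
        return gold, exceptions

    gold, queue = gold_copy({
        "vendorA":   {"coupon": 5.25, "maturity_date": "2030-06-15"},
        "custodian": {"coupon": 5.25, "maturity_date": "2030-06-16"},  # mismatch
    })
    print(gold)    # vendorA values win on precedence
    print(queue)   # maturity_date raised for review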


So how did this build vs. buy story end? Unsurprisingly, it’s still a “WIP”.

The CIO was determined to take a shot at the self-build, and we were able to help him by licensing our data model as a building block. This gives him a running start by solving the core problem right out of the gate: modeling complex relationships among data. They loaded GoldenSource’s flexible schema onto a SQL Server instance and already say that we saved them a few years of intellectual cycles.

They’ll cross the bridge for external connectors in a few months, and again we can help with our pre-built connectors and integration accelerators if it becomes too much for them to take on. My concern is that they are at risk of becoming an IT shop, when actually they’re already a pretty awesome Hedge Fund.

Contact us to learn more
