Most of my recent work experience has been in large multi-tier application design and implementation. This has included implementing custom database connection pooling, object storage, thread pooling, transparent and explicit caching, a custom database abstraction language, and an application-specific authentication, authorization, and mandatory access-control mechanism, among many other things.
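As a flavor of the pooling pattern mentioned above, here is a minimal Python sketch of a bounded connection pool. The names and the factory callback are hypothetical illustrations, not the original implementation.

```python
# Minimal bounded connection pool: acquire blocks until a connection is
# free, so the total number of open connections never exceeds `size`.
# The `factory` callable and all names here are illustrative only.

import queue


class ConnectionPool:
    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks (up to `timeout` seconds) until a connection is available.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for reuse by the next caller.
        self._pool.put(conn)
```

The same check-out/check-in shape applies to thread pools and other bounded resources.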
My operations background has made me more sensitive to deployment details that many other engineers seem to overlook. I feel that monitoring, debugging, and granular configuration are as fundamental to large systems as implementation details.
UNIX administration under BSDI, FreeBSD, Linux, NetBSD, SunOS, Solaris, OSF/1 (Digital Unix), and Irix, including, but not limited to, installation, configuration, and maintenance of the following network servers: ftpd, innd, httpd (Apache, Weblogic, NCSA, Netscape, thttpd, jetty, resin, and my own), gated, zebra, named, Netscape mail, NIS, NFS, sendmail, postfix, cyrus, ssh, qmail, smail, tcpd (wrappers), uucp, ipf, ipnat, Kerberos v5, Netscape Directory Server, OpenLDAP, AFS, etc. Certificate management using OpenSSL. KAME IPsec and IPv6.
Installation and configuration of Cisco routers (IOS 9.x-12.x), PIX firewalls, and Cisco LocalDirector, as well as other miscellaneous network hardware including F5 BigIP, Alteon switches, Morning Star routers, and Livingston Portmaster and Ascend terminal servers.
SRE supporting F1 and building various software to help our small team support a growing list of demanding customers.
I write lots of Go and Python code, as well as working with a bunch of internal systems I can't talk about publicly.
Built the core engine of membase and lots of surrounding tools.
[need to write more] Job processing, development environments, hg → git migration, search, feeds, etc.
Replaced a build and deployment system made up of custom Ant scripts and an embedded servlet container with a Maven-driven, standard full-project build deployable in any standard servlet container. As part of this project, I increased test coverage from zero to as close to 100% as possible without introducing dramatic changes to the way the code worked (the remaining gap was mostly due to limitations of the DB access methodology). This included building new test technology to test code that was otherwise untestable.
In a forward-looking architecture, I created new data access mechanisms and an abstract query language that allowed somewhat complex queries to be issued without regard to the underlying access technology. The same query could be issued over objects stored in LDAP, JDBC, in-memory structures, Hibernate, and potentially more, maximizing code reuse and consistency.
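The core idea can be sketched in a few lines of Python: a query is a backend-neutral data structure, and each storage technology supplies its own evaluator. All names here are hypothetical (the original was a Java system); only the pattern is the point.

```python
# A backend-neutral query plus two interchangeable backends. The same
# Query object works against either one; names are illustrative only.

from dataclasses import dataclass
from typing import Any, Callable, Dict, Iterable, List


@dataclass(frozen=True)
class Query:
    """Field/value comparisons, implicitly ANDed together."""
    filters: Dict[str, Any]


class Backend:
    """Interface each storage technology implements."""
    def run(self, query: Query) -> List[Dict[str, Any]]:
        raise NotImplementedError


class InMemoryBackend(Backend):
    """Evaluates the query directly over in-memory records."""
    def __init__(self, rows: Iterable[Dict[str, Any]]):
        self.rows = list(rows)

    def run(self, query: Query) -> List[Dict[str, Any]]:
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in query.filters.items())]


class SQLBackend(Backend):
    """Translates the same query into SQL instead of evaluating it."""
    def __init__(self, table: str,
                 execute: Callable[[str, tuple], List[Dict[str, Any]]]):
        self.table, self.execute = table, execute

    def run(self, query: Query) -> List[Dict[str, Any]]:
        where = " AND ".join(f"{k} = ?" for k in query.filters)
        sql = f"SELECT * FROM {self.table} WHERE {where}"
        return self.execute(sql, tuple(query.filters.values()))
```

A caller builds `Query({"status": "active"})` once and hands it to whichever backend holds the data; an LDAP or Hibernate backend would slot in the same way.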
I also developed a set of UI widgets, using Google Web Toolkit and later ZK, that formed the foundation of much of our information display. All information displays are realtime, driven by a GC-safe pub/sub mechanism with very granular subscriptions. Change tracking is built with AspectJ, which automatically records the previous and new value for every property change on every entity.
Designed and implemented network management software, scaling it from under 200,000 transactions per day averaging around 6 seconds per transaction to a fairly steady transaction time of just under a second (mostly network time) at about 500,000 transactions per day on the same hardware. The software scales horizontally, which has allowed us to reach roughly 3,000,000 transactions per day by expanding the cluster to about 24 machines (although the same load was also tested on a single machine). While scaling, the software continued to grow in complexity, adding support for additional management protocols (TR-069 and SNMP), additional databases (Oracle, PostgreSQL), and additional access interfaces (Web 2.0 UI, SOAP, XML-RPC).
Built an abstract TR-069 interface based on dynamically reconfigurable state machines, allowing database-driven workflows configurable per transaction type and per group via a drag-and-drop, browser-based interface. On top of this sits an API presenting a set of simple request/response methods that synchronize multiple asynchronous events that may be occurring on multiple machines within a cluster. Synchronization and data transfer occur over a custom multicast messaging system that allows short-lived temporary subscribers to receive compressed objects delivered in multiple segments, which may arrive in any arbitrary order.
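The segment-reassembly part of that transport can be sketched briefly in Python. The wire format, names, and use of zlib are illustrative assumptions, not the original implementation:

```python
# Reassembling a compressed object from message segments that may
# arrive in any order. Each segment carries (msg_id, index, total,
# payload); once all `total` segments for a msg_id are present, the
# payloads are concatenated in index order and decompressed.

import zlib


class Reassembler:
    def __init__(self):
        self._parts = {}   # msg_id -> {index: payload}
        self._totals = {}  # msg_id -> expected segment count

    def add(self, msg_id, index, total, payload):
        """Record one segment; return the decompressed object bytes
        when the message is complete, else None."""
        self._totals[msg_id] = total
        parts = self._parts.setdefault(msg_id, {})
        parts[index] = payload
        if len(parts) == total:
            data = b"".join(parts[i] for i in range(total))
            del self._parts[msg_id], self._totals[msg_id]
            return zlib.decompress(data)
        return None
```

Because completion is detected purely by counting distinct indices, the sender's segments can be interleaved with other messages or delivered out of order without any coordination.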