Senior DevOps Engineer

Posted: 6/24/2020
Category: Information Technology


Medical Science & Computing (MSC), a Dovel company, is an exciting, growth-oriented company dedicated to providing mission-critical scientific and technical services to the Federal Government. We have a distinguished history of supporting the National Institutes of Health (NIH) and other government agencies. MSC offers a dynamic and upbeat work environment, excellent benefits, and career growth opportunities.


We attract the best people in the business with our competitive benefits package, which includes medical, dental, and vision coverage, a 401k plan with employer contribution, paid holidays, vacation, Medical and Flexible Spending Accounts, Pre-Tax Transit Assistance, and tuition reimbursement. If you enjoy being part of a high-performing, professional-service and technology-focused organization, please apply today!


NCBI is part of the National Library of Medicine (NLM) at the National Institutes of Health (NIH). NCBI is the world's premier biomedical center, hosting over six million daily users who seek research, clinical, genetic, and other information that directly impacts biomedical research and public health – at NCBI you can literally help to accelerate cures for diseases! NCBI's wide range of applications, platforms (Node, Python, Django, C++, you name it), and environments (big data [petabytes], machine learning, multiple clouds) serve more users than almost any other US Government agency.


MSC is searching for a DevOps Engineer to support the National Center for Biotechnology Information (NCBI), part of the U.S. National Library of Medicine, National Institutes of Health. This opportunity is full-time and on-site in Bethesda, MD.


DevOps facilitates software development and deployment through automation, providing efficient and convenient solutions to the challenges of scaling development efforts across teams, languages, and cloud environments.


Our DevOps team:

  • Streamlines multi-language software development by providing tools and templates to solve common challenges of Continuous Integration
    • We support C++, Python and Scala, and also work with Go and Rust
  • Develops a modern continuous deployment platform with cutting-edge technologies (containers, cluster schedulers, service mesh, dynamic secrets provisioning)
  • Minimizes toil by automating configuration via codification (Terraform is one of our main instruments)
  • Works with product development teams to help them adopt the platform and best practices (we maintain some of the "core libraries" product teams rely on)
  • Performs research and evaluates new technologies to meet the most advanced demands of our product teams (e.g., delivering large, versioned data sets to applications running on Kubernetes)
  • Maintains a high level of education for ourselves and our customers (training courses on-site and off-site, conference attendance and tuition reimbursement)
  • Practices Agile development (Scrum) and continuous improvement 
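To make the "tools and templates" idea above concrete, here is a hedged sketch of rendering per-language CI build steps from one shared template, so product teams don't hand-write pipelines. All names, images, and commands below are illustrative assumptions, not NCBI's actual tooling.

```python
# Hypothetical sketch: one shared template, many languages.
# The language -> toolchain mapping is invented for illustration.

BUILD_TEMPLATE = """\
steps:
  - name: build-{lang}
    image: {image}
    command: {command}
"""

TOOLCHAINS = {
    "python": {"image": "python:3.11", "command": "pytest"},
    "cpp":    {"image": "gcc:13",      "command": "cmake --build ."},
    "scala":  {"image": "sbt:latest",  "command": "sbt test"},
}

def render_ci_config(lang: str) -> str:
    """Render a CI build step for one language from the shared template."""
    tc = TOOLCHAINS[lang]
    return BUILD_TEMPLATE.format(lang=lang, image=tc["image"], command=tc["command"])

if __name__ == "__main__":
    print(render_ci_config("python"))
```

The point of the sketch is the design choice: centralizing the template means a fix to the build convention propagates to every language at once.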

At its foundation, our deployment platform stands on:

  • TeamCity for Continuous Integration (while also considering Jenkins and GitLab)
  • Artifactory for storing libraries developed internally, as well as container images (ECR and GCR experiments in cloud environments are planned)
  • Nomad and Kubernetes to schedule and run our deployments (currently transitioning from Nomad to Kubernetes)
  • Telegraf, InfluxDB, Kapacitor and Grafana along with OpsGenie for monitoring and alerting (recently set up Influx Enterprise cluster)
  • Consul and Linkerd for service discovery (we heavily contributed to Consul support in Linkerd in its early days)
  • Vault for secrets management (looking to scale it massively)
  • Puppet and Terraform for configuration management (we develop custom Terraform plugins for our specific needs)
  • AWS, GCP and on-premises datacenters (we strive to provide a seamless application deployment experience between data centers and clouds)
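The monitoring layer above (Telegraf/InfluxDB/Kapacitor/Grafana with OpsGenie) ultimately encodes alerting rules. As a purely illustrative sketch of the kind of logic a Kapacitor-style task expresses, here is the "fire only after N consecutive breaches" pattern in plain Python; in the real stack, points stream from InfluxDB and alerts page via OpsGenie.

```python
# Illustrative alert rule: trigger only when a metric stays above a
# threshold for `consecutive` samples in a row, to avoid paging on a
# single noisy data point. Values below are made up.

from typing import Iterable

def breaches(points: Iterable[float], threshold: float, consecutive: int) -> bool:
    """True if `points` exceeds `threshold` for `consecutive` samples in a row."""
    streak = 0
    for value in points:
        streak = streak + 1 if value > threshold else 0
        if streak >= consecutive:
            return True
    return False

cpu = [40.0, 92.0, 95.0, 97.0, 60.0]
print(breaches(cpu, threshold=90.0, consecutive=3))  # → True (three points above 90)
```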


Duties & Responsibilities

• Analyze requirements presented by Product Owner and design sustainable solutions to advance deployment of platform functionality
• Manage cloud infrastructure as code
• Develop software to facilitate Continuous Integration and Continuous Deployment
• Troubleshoot performance and scalability issues in products and infrastructure
• Mentor junior team members (or be a mentee)

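Much of the continuous-deployment software mentioned above reduces to ordering dependent steps (build, then publish, then roll out) and running them in sequence. As a hedged sketch, here is Kahn's topological sort over a tiny dependency graph; the step names are invented for illustration.

```python
# Sketch: order pipeline steps so every dependency runs before its
# dependents (Kahn's algorithm). Step names are hypothetical.

from collections import deque

def topo_order(deps: dict[str, set[str]]) -> list[str]:
    """Return steps so that every dependency precedes its dependents."""
    indegree = {step: len(d) for step, d in deps.items()}
    dependents: dict[str, set[str]] = {s: set() for s in deps}
    for step, d in deps.items():
        for dep in d:
            dependents[dep].add(step)
    ready = deque(sorted(s for s, n in indegree.items() if n == 0))
    order: list[str] = []
    while ready:
        step = ready.popleft()
        order.append(step)
        for nxt in sorted(dependents[step]):
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(deps):
        raise ValueError("dependency cycle in pipeline")
    return order

pipeline = {
    "build": set(),
    "test": {"build"},
    "publish-image": {"test"},
    "deploy": {"publish-image"},
}
print(topo_order(pipeline))  # → ['build', 'test', 'publish-image', 'deploy']
```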

Requirements

  • Solid programming knowledge of Python, Java (version 8 or above), or C++, and a desire to learn new languages
  • Five or more years of experience
  • Hands-on Linux experience. System programming expertise or understanding of how container runtimes work is a big plus
  • Experience with AWS, GCP, Azure, or other cloud service providers
  • Experience using cluster scheduler technologies (Kubernetes, Nomad, Mesos), or solid understanding of the concepts they operate upon
  • Understanding of distributed systems design principles (we will ask you about consensus, and we don't mean blockchain)
  • Customer-focused, team-oriented disposition
  • Interpersonal communications skills, to interface with customers, peers and management
  • Integrity and responsibility
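Since the posting says interviewers will ask about consensus, here is a toy illustration of the quorum intuition behind protocols such as Raft: a value counts as committed only once a strict majority of nodes acknowledge it, so any two quorums must overlap in at least one node. This sketches the counting argument only, not a protocol implementation.

```python
# Toy quorum arithmetic behind majority-based consensus.
# Node names and cluster size are made up for illustration.

def quorum_size(cluster_size: int) -> int:
    """Smallest strict majority of a cluster."""
    return cluster_size // 2 + 1

def is_committed(acks: set[str], cluster: set[str]) -> bool:
    """An entry is durable once a majority of the cluster acknowledges it."""
    return len(acks & cluster) >= quorum_size(len(cluster))

cluster = {"n1", "n2", "n3", "n4", "n5"}
print(quorum_size(len(cluster)))                   # → 3
print(is_committed({"n1", "n2"}, cluster))         # → False (no majority yet)
print(is_committed({"n1", "n2", "n4"}, cluster))   # → True
```

The overlap property is why a 5-node cluster tolerates 2 failures: any two quorums of size 3 share a node, so conflicting values can never both be committed.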

Educational Requirements

  • B.S. in a STEM field (Engineering, Computer Science, Mathematics, Physics)
  • Alternatively, equivalent industry experience in Software Development

Bonus Points

  • Strong presentation skills
  • Experience mentoring other developers
  • Experience working with HashiCorp products
  • Experience setting up or using monitoring systems (Grafana, TICK Stack, Prometheus)
  • Experience managing stateful datasets in cloud environments
  • Any other DevOps technologies, any prior DevOps experience



Medical Science & Computing (MSC), a Dovel company, is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status.

