Position: #1573 / Sr. Data Architect
Location: Chicago, IL
Domain: IT Software
Level: Senior Level (10+ years)
Key Components of the role
- Architect robust data solutions to ingest, catalog and analyze high-volume, high-frequency data in real time to generate business insights.
- Design and build optimal data engineering processes and frameworks considering best practices around efficiency, data integrity, scalability and maintainability.
- Create optimized workflows and design specification documents to help define data platform features and SLAs.
- Develop rapid prototypes and proofs of concept to help assess strategic opportunities and future data product capabilities.
- Experience working with APIs and ingesting external data.
- Apply sound scientific principles to analysis, model training, testing and validation to create precise, high-performing, reliable models for use in product
- Apply software engineering principles to write functional, scalable, tested and clean deployable code when implementing algorithms
- Help solve challenging problems through problem formulation, prioritization, exploration and implementation
- Collaborate and work closely with Engineering, Product and Design to create high quality reliable products
- Responsible for the management and maintenance of databases, reports, and portals
- Establish standardized, best-practice reporting and own weekly and monthly report updates
- Develop and deploy tracking and measuring tools to allow for identification of areas of opportunity
Qualifications
- 5+ years of full-time experience in IT-related fields
- Minimum of a Bachelor's degree, preferably in MIS, or equivalent IT field experience required.
- 3+ years data engineering experience supporting high-volume, high-velocity data streams.
- Strong background in architecting relational databases such as Postgres and NoSQL databases such as MongoDB or Cassandra (preferred)
- Ability to write SQL queries and use tools such as Hadoop, Tableau (nice to have), and other data reporting tools. Experience in transactional and data warehouse environments using MySQL, Hive, or other database systems. Must have a deep understanding of joins, subqueries, window functions, etc.
- Ability to use container technologies such as Docker or Kubernetes (nice to have)
- Experience working on a team of programmers and data scientists
- Experience designing and architecting data solutions in AWS, Azure, or Google Cloud (preferred)
- Ability to architect a data lake is a plus
- Awareness of automated testing and continuous integration processes related to data engineering
- Hands-on knowledge of scripting languages (shell scripts) on Linux platforms to perform basic tasks
- Strong machine learning and statistics track record, with expertise in regression methods, classification methods, clustering, neural networks, unsupervised learning methods, etc. (nice to have)
- Published apps, websites or other examples of solutions built
- Effective team player who works with minimal supervision and meets project deadlines in an agile environment
- Able to navigate white-space ambiguity and solve problems creatively
- A desire to be accountable for owning problems from design to implementation
- An ability to evangelize data models to developers and analysts
- Flexibility to adapt to multiple standards based on the use case and technology
- A bias for action and pragmatic solutions
- A low ego and humility; an ability to gain trust through communication and doing what you say you will do
- Excellent verbal and visual communication skills
Pactera is an equal opportunity employer with a diverse and multicultural mix of local and overseas staff. Personal data provided by job applicants will be used strictly in accordance with our personal data policy and for recruitment purposes only.