Story Point Estimation and Story Splitting: How They Work Together in Agile
Agile project management is all about breaking down complex tasks into manageable pieces and accurately estimating their effort. Two key techniques in this process are story point estimation and story splitting. Understanding how these two practices intersect can significantly boost your team's productivity and project outcomes. Let's look into the relationship between story point estimation and story splitting and see how your Agile workflows can benefit from both.

What Is Story Point Estimation?

Story point estimation is a fundamental concept in Agile project management. It is a technique for estimating the amount of work, complexity, and risk involved in finishing a user story. Instead of using hours or days, teams use story points to maintain a relative sizing approach. So, why story points? They help teams focus on the effort rather than the time it might take to complete a task. This method accounts for uncertainties and variations in productivity, making it more adaptable to different scenarios.

How Do Story Points Work?

Teams assign a numerical value to each user story. These values are often based on the Fibonacci sequence (1, 2, 3, 5, 8, 13, etc.) or T-shirt sizes, which reflects the idea that larger numbers or sizes should represent exponentially more effort. Here's a quick breakdown:

| Fibonacci Sequence | T-Shirt Size | Details |
|---|---|---|
| 1 point | XS | A very simple task with minimal complexity |
| 2-3 points | S | Slightly more complex tasks, but still manageable within a short period |
| 5-8 points | M | Tasks that require more effort, likely involving multiple aspects and potential risks |
| 13 points and above | L and above | Highly complex tasks that might need to be split into smaller, more manageable pieces |

By assigning story points, the team can plan sprints more efficiently, prioritize tasks, and spot potential bottlenecks. Story points give a clearer picture of the workload and help in making informed decisions about task assignments and deadlines.
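The point-to-size mapping above can be sketched in a few lines of Python. The thresholds come from the table; the function name itself is illustrative, not a standard Agile tool:

```python
def tshirt_size(points: int) -> str:
    """Map a Fibonacci story-point estimate to a T-shirt size."""
    if points <= 1:
        return "XS"
    if points <= 3:
        return "S"
    if points <= 8:
        return "M"
    return "L+"  # 13 points and above: a candidate for splitting

for p in (1, 3, 5, 13):
    print(p, tshirt_size(p))
```

A team would rarely automate this, of course; the point of the sketch is that the scale is coarse and relative, not a precise measurement.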
What Is Story Splitting?

Story splitting is another essential technique in Agile project management. It's all about breaking down large, complex user stories into smaller, more manageable pieces. This practice not only makes the workload more approachable but also ensures that each piece can be completed within a single sprint.

Why Split Stories?

You might wonder why we need to split stories at all. The main reasons include enhanced manageability, increased focus, and better alignment with sprint goals. Smaller stories are easier to track and complete, making planning and execution more straightforward. They allow teams to focus on specific tasks, leading to higher-quality outcomes and consistent value delivery.

When To Split Stories

Not all stories need splitting, but certain signs indicate when it might be necessary. If a story is too large to be completed within a single sprint, has multiple acceptance criteria, or has vague requirements, it's a good candidate for splitting. Effective methods for story splitting include dividing by workflow, business rules, or data variations. For instance, a feature requiring design, development, and testing can be split into three separate stories. Similarly, a payment system could be split into stories for credit card payments, PayPal payments, and so on. By splitting the story, the team can tackle each part step by step, making progress visible and manageable.

How Story Point Estimation Can Help in Story Splitting

Story point estimation and story splitting are two sides of the same coin, working together to streamline Agile project management. Teams can efficiently decide when and how to split stories by using story points to identify overly complicated or large stories. This ensures that each element is manageable and deliverable within a sprint.

Identifying Complex Stories

Story points help teams gauge the complexity and effort required for each user story.
When a story receives a high point value, it's a signal that the story might be too large or complex to handle in one go. This is where story splitting comes in handy. By breaking down a high-point story, the team can transform it into smaller, more digestible pieces.

Techniques for Splitting Stories

Using story points to guide splitting can be quite straightforward. For example, if a story is assigned 13 points, the team can look at the tasks involved and split them based on criteria such as workflow stages, business rules, or data variations. Imagine a project involving a new user registration feature. If this story is estimated at 13 points, the team might split it into parts like designing the registration form (2 points), implementing the front end (3 points), creating the back-end logic (5 points), and setting up email verification (3 points). This approach breaks down the complexity and makes each task more manageable.

How Story Splitting Can Help Story Point Estimation

Story splitting doesn't just make tasks more manageable; it also plays a crucial role in refining story point estimation. By breaking down complex stories into smaller, clearer tasks, teams can enhance the accuracy of their estimations, leading to better planning and execution.

Simplifying Estimation

When stories are too large or complex, estimating their effort can be challenging and often inaccurate. Splitting these stories into smaller parts simplifies the estimation process. Each smaller story is more straightforward to understand, making it easier for the team to assign accurate story points.

Improving Accuracy

Smaller stories come with more specific requirements and less ambiguity. This clarity allows the team to make more precise estimations. For example, a large story like "Implement user authentication" might be vague and hard to estimate accurately.
By splitting it into smaller stories such as "Design login UI," "Develop front-end login functionality," and "Set up back-end authentication," each part becomes easier to evaluate and estimate accurately.

Real-World Application

Let's say a team is tasked with developing a feature for generating sales reports in an application. Initially, the story might seem daunting, and estimations could range wildly. By splitting the story into smaller tasks, such as creating the report UI, implementing data fetching, and adding filtering options, the team can provide more accurate story point estimates for each part. This not only improves the reliability of the estimates but also makes the planning process smoother and more predictable.

Final Words

Story splitting and story point estimation work well together in Agile project management. Accurately estimating story points helps teams identify complex tasks that need to be broken down, making them manageable within a sprint. In turn, breaking up stories into smaller, well-defined tasks improves the precision of story point estimates, which results in more effective planning and execution. Adopting these techniques can transform your Agile processes, making your team more efficient and your projects more predictable.
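As a closing sketch, the registration-feature split from earlier (form design, front end, back-end logic, email verification) can be sanity-checked in a few lines. The task names and the idea of comparing totals are illustrative, not a standard Agile practice:

```python
ORIGINAL_ESTIMATE = 13  # story points for "new user registration"

split = {
    "Design registration form": 2,
    "Implement front-end": 3,
    "Create back-end logic": 5,
    "Set up email verification": 3,
}

total = sum(split.values())
print(f"Split total: {total} (original estimate: {ORIGINAL_ESTIMATE})")
# A mismatch isn't an error: re-estimating smaller stories often changes
# the total, but a large gap is worth discussing in refinement.
```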
From the day I wrote my first Hello World program, it took me two years to land a job at Amazon and another two years to get into Google. That's because I accomplished this without having a Computer Science degree or attending a boot camp. I made countless mistakes along the way, which made my path to becoming a Software Engineer longer than it should have been. I spent countless hours watching YouTube tutorials and paid for numerous Udemy courses, only to find that they added no real value. If I could go back in time and skip everything that didn't work, I could have reached where I am today within six months of starting programming. That's exactly why I am writing this piece. Today, I'll cut out all the unnecessary fluff and provide you with the quickest route from beginner to full-time Software Engineer.

Avoiding Common Mistakes Most Programmers Make

Before I begin: there are three major mistakes that can slow down your progress toward becoming a full-time Software Engineer. I will share these three mistakes along the way, so stay tuned for that.

Choosing the Right Programming Language

As a new programmer, your first decision is, "Which programming language should I learn?" To help you answer that, let's discuss what beginners typically look for in a programming language. Number one, the language should be easy and intuitive to write. It should not require learning very complex syntax and should read as close as possible to plain English. Next, the programming language should be versatile and have many applications. As a beginner, you don't want to learn a new language for every new project you want to build. In other words, the language should have great returns for the time you invest in learning it. Lastly, the programming language should be fast to write. You shouldn't have to waste time spelling out the declaration of a new variable or a simple iteration through a list.
In other words, it should be concise and get the job done in a minimum number of lines of code. As some of you might have already guessed, Python is the language that checks all these boxes. It's almost as easy as writing in English. It has many different applications, like web development, data science, and automation. And Python is extremely fast to write compared with other popular languages because it requires fewer lines of code for the same amount of functionality. As an example, here is the same program (counting the empty lines in a text file) written in JavaScript vs. Python. You can see that the Python version takes a few lines, while the JavaScript version is considerably longer.

JavaScript

```javascript
const fs = require('fs');
const path = require('path');

const directoryPath = path.join(__dirname, '.');
const filePath = path.join(directoryPath, 'Code.txt');

fs.readFile(filePath, 'utf-8', (err, data) => {
  if (err) {
    console.error(err);
    return;
  }

  const lines = data.split('\n');
  let emptyLineCount = 0;

  lines.forEach(line => {
    if (line.trim() === '') {
      emptyLineCount++;
    }
  });

  console.log('Number of empty lines:', emptyLineCount);
});
```

Python

```python
with open("Code.txt") as f:
    empty_line_count = sum(1 for line in f if not line.strip())

print("Number of empty lines:", empty_line_count)
```

Effective Learning Methods

Now that we know we should learn Python, let's talk about how to do it. And this is where most new programmers make the first major mistake that slows them down: they learn by watching others code. Let me explain this by telling you how most people learn programming. Most newbies go to a course provider like Udemy and look up Python courses. Then they pick one of the 20+ hour courses, thinking that long and detailed must mean good, and then they never end up finishing the course. That's because 20 hours of content is not the same as 20 hours of great content.
The Right Way To Learn To Code

Some people will go to YouTube and watch someone else code without ever writing any code themselves. Watching these tutorials gives them a false sense of progress. That's because coding in your head is very different from actually writing down the code and debugging the errors. So, what is the right way to do it? The answer is very simple: you should learn by coding. For this, you can go to the free website learnpython.org. On this website, just focus on the basic lessons for Python and don't worry about the data science or other advanced tutorials. That's because even if you learn advanced concepts right now, you will not be able to remember them until you have actually applied them to a real-world problem. You can always come back to the advanced concepts in the future when you need them for your projects. Each lesson first explains a basic concept and then asks you to apply it to a problem. Feel free to play with the sample code. Think about other problems you can solve with the concepts you just learned, and try to solve them in the exercise portion. Once you're done with the basics, you're good to move on to the next steps.

Building Projects

In the spirit of learning by coding, the next step is to do some projects in Python. In the beginning, it's very hard to build something on your own, so we'll take the help of experts. Watch the video below on 12 beginner Python projects. In this video, they build 12 beginner Python projects from scratch, including Madlibs, Tic-Tac-Toe, Minesweeper, and more, and all of them are very interesting. They walk you through the implementation of all these projects step by step, making it very easy to follow. But before you start watching this tutorial, there are two things you should know.

Setting Up Your IDE

Number one, you should not watch this tutorial casually.
Follow along if you really want to learn programming and become a Software Engineer. To follow along, you will need something called an Integrated Development Environment (IDE) to build these projects. An IDE, in the simplest terms, is an application where you can write and run your code. There are several popular IDEs for Python. This tutorial uses VS Code, so you might want to download VS Code and set it up for Python before starting the tutorial. Once you have completed this tutorial, you are ready to work on your own projects.

Developing Your Own Projects

Working on your own projects will help you in multiple ways. Number one, it will introduce you to how Software Engineers work in the real world. You will write code that fails, debug it, and repeat the process over and over again. This is exactly what a day in the life of a Software Engineer looks like. Number two, you will build a portfolio of projects. You can host your code on GitHub and put the link in your resume. This will help you attract recruiters and get your resume shortlisted. Number three, building your own projects will give you the confidence that you are ready to tackle new challenges as a Software Engineer. But what kind of projects should you work on? You can pick anything you find interesting, but here are some examples: a web crawler, an alarm clock, an app that shows you a Wikipedia article of the day, or an online calculator. You can also build a spam filter, an algorithmic trading engine, or an e-commerce website.

Preparing for Job Applications

Now you have a great resume, and you are confident about your programming skills. Let's start applying for Software Engineer positions. Wait a second. This is actually the second major mistake new programmers make.
You see, in an ideal world, having good programming skills and a great resume is all you should need to become a Software Engineer. But unfortunately for us, tech companies like to play games in their interviews. They ask specific kinds of programming questions, and if you don't prepare for these questions, you might not get the results you expect.

Essential Course: Data Structures and Algorithms

So, let's see how to prepare for interviews. All the interviews are based on one course that is taught to all Computer Science graduates: Data Structures and Algorithms. Fortunately for us, Google has created such a course and made it available for free on Udacity. And the best part is that the course is taught in Python. In this three-month course, you'll learn about different algorithms related to searching and sorting. You'll learn about data structures like maps, trees, and graphs. Don't worry if you don't know any of these terms right now; I am sure that by the end of this course, you'll be a pro. For that, just keep two things in mind. Number one, be consistent and finish the course. As I mentioned earlier, most people start courses and never finish them. So, make sure you take small steps every day and make regular progress. Number two, make sure you complete all the exercises in the course. As I have already said many times, the only way to learn coding is by coding. So, try to implement the algorithms on your own and complete all the assignments. Trust me when I say this: when it comes to interviewing for entry-level jobs, this course is the only difference between you and someone who dropped more than a hundred thousand dollars on a Computer Science degree. If you finish this course, you'll be pretty much on par with a CS graduate when you interview.
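To give a taste of the material, here is a minimal sketch of binary search, one of the searching algorithms such a course typically covers. This is a standard textbook version, not code taken from the course itself:

```python
def binary_search(items: list[int], target: int) -> int:
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # halve the search range each step
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 8, 13], 8))  # prints 3
```

Being able to write something like this from memory, and explain why it runs in O(log n) time, is exactly the kind of fluency the interviews test.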
Interview Preparation

After completing the Data Structures and Algorithms course, you'll have all the foundational knowledge needed to tackle interviews. To further sharpen your skills, practice with questions previously asked by tech companies. For that, you should use a website called Leetcode.com. On Leetcode, you get interview-style questions, and you can write your code and test your solution directly on the website. Leetcode is great for beginners because all the questions are tagged as easy, medium, or hard. If you buy a premium subscription, you can also filter the questions by the tech company that asked them in past interviews. Start with easy questions and keep working on them until you can solve them in 45 minutes. Once that happens, move on to medium questions. When you start solving mediums in 45 minutes, you can start applying for Software Engineering jobs. If you are lucky, you will get a job right away. For most people, though, it will be a process full of disappointment and rejection.

Handling Rejections

And this is where they make the third and biggest mistake of all: they quit. The main reason people give up early is that they overthink and complicate the interview process. After every rejection, they replay the interview over and over in their head to figure out why they failed, and they take every rejection personally. To avoid this, stay inside your circle of control: try to influence the outcome of your interviews, but never get tangled up in the things you can't control. In other words, do your best to crack the interviews while staying detached from their outcome.
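To make the practice loop concrete, here is a sketch of a classic easy-tier question in the Leetcode style: given a list of numbers and a target, return the indices of two numbers that sum to the target. The solution below is illustrative, not an official one:

```python
def two_sum(nums: list[int], target: int) -> list[int]:
    """Return indices of the two numbers that add up to target."""
    seen = {}  # value -> index of values visited so far
    for i, n in enumerate(nums):
        if target - n in seen:  # the needed complement was seen earlier
            return [seen[target - n], i]
        seen[n] = i
    return []  # no pair found

print(two_sum([2, 7, 11, 15], 9))  # prints [0, 1]
```

The obvious nested-loop solution is O(n^2); the dictionary brings it down to O(n), and explaining that trade-off out loud is the skill interviewers are actually probing for.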
TL;DR: Product Owner and Scrum Master?

Combining the roles of Product Owner and Scrum Master in one individual is a contentious topic in the Agile community. A recent LinkedIn poll (see below) revealed that 54% of respondents consider this unification useless, while 30% might accept it in rare moments. This blog post explores the implications of merging these roles, emphasizing the importance of distinct responsibilities and the potential pitfalls of combining them. We also consider exceptions where this approach might be temporarily justified and analyze the insightful comments from industry professionals.

The LinkedIn Poll: Could the Product Owner and Scrum Master Be the Same Individual?

On May 23, 2024, I asked a simple question: Could the Product Owner and Scrum Master be the same individual? Or is mixing roles disadvantageous? Agile puts a lot of emphasis on focus. How come, then, that practitioners are so often asked, or expected, to cover two roles simultaneously? Penny-pinching or a smart move from a holistic perspective?

Referring to the comments, the majority strongly opposes combining the Product Owner and Scrum Master roles, citing significant differences in responsibilities and the need for checks and balances. Conditional acceptance is noted mainly in startup contexts with resource constraints. Some are open to exceptions but remain cautious about long-term viability. Personal experiences highlight the challenges and potential conflicts, while flexible approaches are suggested for specific contexts. We can identify five categories among the comments:

1. Strict Opposition: Fundamental Differences in Roles

The Product Owner and Scrum Master roles have distinct responsibilities, requiring full-time attention and unique skill sets. Combining them can lead to neglect and conflict of interest and undermine the healthy tension that balances product goals with team capacities.
The roles act as checks and balances, ensuring ambitious goals and realistic execution.

2. Conditional Acceptance: Resource Constraints in Startups

In resource-limited situations, such as startups, combining roles may be necessary due to budget constraints. However, this should be a temporary solution until the organization can afford to separate the roles.

3. Skeptical but Open to Exceptions: Specific Contexts and Temporary Solutions

While generally inadvisable, combining roles might be feasible in exceptional circumstances, such as during temporary absences or in small teams, provided there is clear role differentiation and support.

4. Experiential Insights: Personal Experience

Individuals with personal experience managing both roles, or observing this practice, often find it problematic due to inherent conflicts of interest and the heavy workload.

5. Pragmatic and Flexible Approaches: Practical Solutions

Some suggest rotating the Scrum Master role among team members or having a Developer take on the role to balance responsibilities. Understanding Agile principles and maintaining flexibility in role management can help mitigate potential issues.

Ten Reasons Why Combining Product Owner and Scrum Master Roles Is Not a Good Idea

What other reasons might there be to question the idea of unifying the Product Owner and Scrum Master roles? Let's have a look:

1. Conflict of Interest

Combining the roles of Product Owner (PO) and Scrum Master (SM) creates a conflict of interest. The PO maximizes the product's value, often requiring prioritization and tough trade-offs. The SM ensures Scrum practices are followed, fostering a healthy team environment. Combining these roles compromises both priorities, reducing objectivity and effectiveness.

2. Loss of Focus

Each role demands full attention to be effective. The PO must stay engaged with stakeholders, market trends, and the Product Backlog while creating alignment with their teammates.
Simultaneously, the SM needs to focus on coaching the team, removing impediments, and supporting changes at the organizational level to improve the team's environment. Combining roles can dilute focus, leading to suboptimal performance in both areas.

3. Compromised Accountability

Scrum thrives on clear accountabilities. The PO is accountable for the Product Backlog and value delivery, while the SM is accountable for the Scrum process and team health. Merging these roles blurs the accountability a Scrum team's success is based on.

4. Reduced Checks and Balances

Scrum's design includes built-in checks and balances: the PO focuses on improving value creation, while the SM ensures a sustainable pace and quality. Combining the roles removes this tension, potentially leading to burnout or technical debt due to a lack of restraint on delivery pressures.

5. Increased Risk of Micromanagement

Combining roles can lead to micromanagement, as the individual may struggle to switch between facilitation and decision-making. This can undermine the team's self-management, reducing creativity and innovation.

6. Decreased Team Support

The SM role involves supporting the team by removing impediments and ensuring a healthy work environment. A combined role may prioritize product issues over team issues, reducing the support the team receives and impacting morale and productivity.

7. Impaired Decision-Making

The PO must make decisions quickly to adapt to market changes, while the SM needs to foster team accord and gradual improvement. Combining these roles can slow decision-making processes and create confusion within the team regarding priorities.

8. Diluted Expertise

Both roles require specific skills and expertise. A PO needs strong business acumen, while an SM needs a deep understanding of Agile practices and team dynamics. Combining the roles often means one skill set will dominate, leaving gaps in the other area.

9. Impeded Transparency

The Scrum framework relies on transparency to inspect and adapt effectively. A single person handling both roles may unintentionally hide issues or conflicts to maintain the appearance of progress, thus impairing the team's ability to improve continuously.

10. Undermined Scrum Values

Combining roles can undermine the Scrum values of focus, openness, respect, commitment, and courage, as the individual may struggle to balance conflicting responsibilities and provide the necessary support for the team to embody these values effectively.

Consequently, by separating the roles of Product Owner and Scrum Master, organizations ensure clear accountability, maintain checks and balances, and foster a healthier, more productive Scrum environment.

Additional Considerations

What else do we need to consider? Five issues come to mind:

1. Role Synergy vs. Role Conflict

While it's tempting to think that combining roles might streamline processes and communication, each role has distinct and sometimes conflicting responsibilities. Consider whether the short-term gains of combining roles might be outweighed by long-term inefficiencies and conflicts.

2. Impact on Team Dynamics

Consider how the combination of roles might affect team dynamics. A single person wielding both roles could inadvertently create a hierarchical dynamic, undermining the flat structure that Scrum promotes and potentially leading to reduced team morale and engagement.

3. Sustainability and Burnout

The workload for both roles can be intense. Combining them can lead to burnout for the individual trying to manage both responsibilities. Think about how this might affect their ability to perform effectively over time and the potential impacts on team stability and productivity.

4. Training and Development

Reflect on the development paths for team members. Combining roles might hinder individuals' ability to specialize and grow in their respective areas.
It might be more beneficial to invest in strong, separate training programs for Product Owners and Scrum Masters to ensure they can excel in their distinct roles.

5. Adaptability to Change

Agile practices, including Scrum, thrive on adaptability. Combining roles might reduce the team's ability to quickly adapt to changes, as the dual-role individual could be overloaded and less responsive to necessary pivots in product development or team facilitation.

Three Exceptions Where Combining the Product Owner and Scrum Master Roles Might Be Justified

By now, we have a solid understanding that under usual circumstances, it is not a good idea to combine the Product Owner and Scrum Master roles. Under which circumstances might it be acceptable, though? Let's delve into the following:

1. Small Startups or Early-Stage Companies

Context: In the early stages of a startup, resources are often limited. The team might be small, focusing on rapid development and iteration to find product-market fit.

Justification: Combining the roles can help streamline decision-making processes and reduce overhead. The person in the dual role can quickly pivot and make changes without waiting for coordination between separate roles.

Considerations: This should be temporary until the startup grows and can afford to hire separate individuals for each role. As the company scales, the complexity and workload will likely necessitate separating the roles to maintain effectiveness and prevent burnout.

2. Temporary Absence or Transition Period

Context: If the organization is undergoing a transition, such as the departure of a Scrum Master or Product Owner, it might be necessary to combine roles temporarily to ensure continuity.

Justification: Having a single individual temporarily fill both roles can provide stability and maintain the momentum of ongoing projects. It ensures that the Scrum events continue to be facilitated and that Product Backlog management does not lapse.
Considerations: During this period, the organization should actively search for a replacement to fill the vacant role. Additionally, the individual in the dual role should receive support to manage their workload, such as delegating non-critical tasks to team members.

3. Highly Experienced Agile Practitioner

Context: In situations where an organization has an individual with extensive experience and a deep understanding of both Scrum and the product domain, they might be capable of effectively handling both roles.

Justification: An experienced Agile practitioner might have the skills and knowledge to temporarily balance the demands of both roles, especially in crisis situations where their expertise is crucial to navigating complex challenges.

Considerations: This should be a short-term solution even with a highly skilled individual. The organization should closely monitor the impact on the team and the individual's workload. Continuous feedback from the Scrum team and stakeholders is essential to ensure that combining roles does not negatively affect productivity and morale.

Additional Guidance

Clear communication: In any of these scenarios, it is crucial to maintain clear communication with the team about the temporary nature of the combined role and the reasons behind it. This transparency helps manage expectations and fosters trust within the team.

Monitoring and support: Regular check-ins are necessary to assess the individual's well-being and effectiveness in managing both roles. Providing additional support, such as temporary assistance or redistributing some responsibilities, can help mitigate the risk of burnout.

Plan for transition: Have a clear plan for transitioning back to separate roles as soon as feasible. This includes setting criteria for when the transition will occur, such as reaching a specific team size in a startup or hiring a new team member during a transition period.
By considering these exceptions and managing them thoughtfully, organizations can navigate periods where combining the Product Owner and Scrum Master roles might be justified while minimizing potential drawbacks.

Food for Thought

By thoroughly considering the following aspects, you can make a more informed decision about whether combining the Product Owner and Scrum Master roles is the right move for your organization:

Experimentation and Feedback
If the idea of combining roles persists, consider running it as a time-boxed experiment. Gather feedback from the team and stakeholders before making a permanent change. This can provide insights into the practical implications and help you make a more informed decision.

Cultural Fit
Assess whether this change aligns with your organization's culture and values. Scrum and Agile practices often challenge traditional hierarchies and thrive in a culture of collaboration and continuous improvement. Ensure that any role changes support rather than hinder these cultural elements.

Long-Term Vision
Keep the long-term vision in mind. Decisions made today should support the organization's goals and values in the future. Consider how role clarity and adherence to Scrum principles will impact your team's ability to deliver value continuously.

Conclusion

While combining the Product Owner and Scrum Master roles might seem efficient in specific contexts, it generally poses significant risks to the effectiveness of Scrum teams. These roles' distinct responsibilities, necessary skills, and built-in checks and balances are crucial for fostering a productive and balanced environment where Scrum teams can thrive. Although there are rare situations, such as resource-constrained startups or temporary transitions, where merging these roles might be justified, these should only be temporary solutions with clear plans for separation.
The insights from the LinkedIn poll and comments highlight the importance of maintaining role clarity to ensure sustainable team performance and alignment with Agile principles.
This article discusses the skill set that various companies expect for SRE roles. I have worked as a Site Reliability Engineer for companies such as Amazon, Microsoft, and TikTok. I have attended numerous interviews for Site Reliability Engineering roles and have interviewed other engineers for SRE roles at the companies where I worked. The role of Site Reliability Engineer goes by different titles in various companies. For example, Google calls this role Site Reliability Engineering, Microsoft used to call it Service Engineering, Amazon calls it Systems Development Engineer, Meta calls it Production Engineering, and a few other companies call this role DevOps. These roles have many common requirements. Let's look into the various skills that companies, especially the big technology companies, look for while interviewing engineers for these roles.

Coding

One of the most important skills an SRE needs is coding, since automating repetitive tasks and writing tools to manage infrastructure efficiently is a core part of the SRE job. Companies test a candidate's coding skills through coding interviews, which usually come in two types. The first type focuses on standard data structures and algorithms; coding challenges from websites like LeetCode or HackerRank are good practice for this type of interview. The second type focuses on coding challenges that emulate some of the day-to-day tasks SREs work on, for example, reading data from files and processing it. Companies are usually open to candidates using any programming language, but, based on my experience, coding in Python is helpful since it is easy to implement solutions in Python and the majority of SREs use Python for day-to-day automation.

System Design

The second important skill an SRE needs is a solid understanding of large-scale distributed systems.
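As an illustration of the second, task-oriented type of coding question mentioned above, here is a hypothetical Python sketch: parse web-server access-log lines and count HTTP status codes. The log format, field position, and sample data are assumptions made for the example, not part of any specific company's interview.

```python
from collections import Counter

def count_status_codes(lines):
    """Count HTTP status codes in access-log lines.

    Assumes the status code is the 9th whitespace-separated field,
    as in the common Apache/Nginx combined log format.
    """
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) > 8 and fields[8].isdigit():
            counts[fields[8]] += 1
    return counts

# Hypothetical sample lines; in an interview these would come from a file.
sample = [
    '127.0.0.1 - - [10/Jun/2024:13:55:36 +0000] "GET / HTTP/1.1" 200 2326 "-" "curl/8.0"',
    '127.0.0.1 - - [10/Jun/2024:13:55:37 +0000] "GET /missing HTTP/1.1" 404 153 "-" "curl/8.0"',
    '127.0.0.1 - - [10/Jun/2024:13:55:38 +0000] "GET / HTTP/1.1" 200 2326 "-" "curl/8.0"',
]
print(count_status_codes(sample))  # Counter({'200': 2, '404': 1})
```

In an interview setting, what matters most is handling edge cases (short or malformed lines, as the length check above does) and talking through the assumed input format before coding.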
Companies look for this knowledge by asking system design questions during interviews. An example question is "Design a logging service." These questions tend to be vague, and it is important to ask a lot of clarifying questions before proposing a design. A few key things for an SRE to focus on while designing a system are its scalability, reliability, and security. It is also important to focus on the non-abstract parts of the system, such as capacity planning.

Operating Systems

A deep understanding of operating systems, especially Linux, is invaluable for an SRE. Companies look for this knowledge through interviews focused on the Linux operating system. The questions may cover topics such as popular Linux commands for administering and troubleshooting Linux, the Linux kernel, system calls, troubleshooting performance issues, and the memory, network, disk, and process subsystems of Linux.

Computer Networking

A good understanding of the TCP/IP model and various protocols is a great skill for an SRE, as it helps in troubleshooting production issues and designing infrastructure. A few protocols worth understanding deeply are HTTP, TLS, DNS, TCP, UDP, IPv4, IPv6, ARP, and ICMP. It is also useful to know which tools can be used to analyze each of these protocols.

SRE Best Practices

Companies often look for candidates who understand SRE best practices related to topics such as observability (alerts, metrics, logs, traces, dashboards, etc.), incident management, change management, automation, operational excellence, and capacity planning. The topics may also include concepts such as SLI/SLO/SLA and MTTR/MTTA/MTTI.

Work Experience

This category includes questions about the projects you have worked on in your current and previous jobs.
Interviewers typically ask about a specific project the candidate worked on in the past and dive deep into various aspects, such as the complexity of the project, the challenges faced and how the candidate overcame them, and what the candidate learned from any failures.

Infrastructure

A key responsibility of SREs is to design, deploy, and maintain various infrastructure components such as Kubernetes, SQL and NoSQL databases, message queues, load balancers, and Content Delivery Networks. Knowledge of and experience with the major cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), is another aspect companies look for in a candidate. Depending on the team the position is in, companies may assess the engineer's understanding of one or more of these infrastructure components.

Troubleshooting

Being part of the on-call rotation is an essential part of an SRE's job. Effective troubleshooting skills are important because resolving user-impacting issues under time pressure is critical to maintaining the uptime of services. SREs combine their knowledge of various technologies and systems with their experience operating services in production to troubleshoot issues. Companies assess troubleshooting skills by asking how the engineer would solve a given hypothetical issue. Approaching the problem methodically and demonstrating an understanding of distributed systems is important in this type of interview.

Behavioral

Every company has its unique culture, values, and leadership principles. Behavioral interviews probe whether the engineer matches the company's culture. These questions tend to focus on how the engineer acted in similar situations in the past. An example question is "Tell me about a time when you had to disagree with your manager."
A popular method for answering such questions is the STAR method: Situation, Task, Action, and Result.

Conclusion

The Site Reliability Engineer role is a challenging one that requires a deep understanding of various technologies. By focusing on these key skills, one can become a great Site Reliability Engineer, crack challenging technical interviews, and have a rewarding career. Happy interviewing!
In the world of software development, Agile and DevOps have gained popularity for their focus on efficiency, collaboration, and delivering high-quality products. Although they have different goals, Agile and DevOps are often used interchangeably. This article seeks to illuminate the distinctions and commonalities between these approaches, demonstrating how they synergize to produce results.

Figure courtesy of BrowserStack

Understanding Agile

Overview
Agile is a project management and software development methodology that emphasizes an iterative approach to delivering projects. Emerging from the Agile Manifesto in the early 2000s, Agile focuses on working with customers, adjusting plans as needed, striving for ongoing enhancement, and making small changes gradually instead of large-scale launches.

Key Principles
Agile is founded on four values:
Individuals and interactions are valued over processes and tools.
Working software is prioritized over comprehensive documentation.
Customer collaboration and feedback are promoted throughout development.
Responding to change is favored over following a predetermined plan.

Top Agile Frameworks
Several frameworks have been built on these principles:
Scrum: Work is broken down into sprints lasting around 2 to 4 weeks, with regular check-ins and evaluations.
Kanban: A Kanban board is used to manage progress and review tasks.
Extreme Programming (XP): This technique uses practices such as test-driven development, continuous integration, and pair programming to improve software quality.

Understanding DevOps

Overview
DevOps, short for Development and Operations, encompasses practices, cultural values, and tools that promote teamwork between software development (Dev) and IT operations (Ops). The primary goal of DevOps is to shorten the development cycle, boost deployment frequency, and guarantee the delivery of high-quality software.
Key Principles
DevOps is driven by the following principles:
Collaboration: Fostering teamwork and a sense of shared responsibility between development and operations teams.
Continuous Integration and Continuous Delivery (CI/CD): Ensuring that any changes to the code are thoroughly tested, integrated, and deployed seamlessly.
Monitoring and feedback: Emphasizing real-time monitoring, logging, and feedback mechanisms for timely issue identification and resolution.

Key Practices in DevOps
DevOps revolves around the following core practices:
Infrastructure as Code (IaC): Managing infrastructure configurations through code to automate the setup and control of infrastructure resources.
Continuous Integration: Integrating code changes into a shared repository, with automated builds and tests to detect problems quickly.
Continuous Delivery: Building on CI by automating the deployment process to release code changes to production.
Automated Testing: Incorporating automated tests at each development phase to uphold code quality and functionality.

Comparison Between DevOps and Agile

To distinguish between Agile and DevOps, it is helpful to compare them across several aspects.
Here is a comparison chart summarizing the elements of Agile and DevOps:

| Objective | Agile | DevOps |
|---|---|---|
| Focus | Software development and project management | Software development and IT operations |
| Primary Goal | Delivering small, incremental changes frequently | Shortening the development lifecycle, improving deployment frequency |
| Core Principles | Customer collaboration, adaptive planning, continuous improvement | Collaboration, automation, CI/CD, monitoring |
| Team Structure | Cross-functional development teams | Integrated Dev and Ops teams |
| Frameworks | Scrum, Kanban, XP | CI/CD, IaC (infrastructure as code), automated testing |
| Feedback Loop | Iterative feedback from customers | Continuous feedback from monitoring and logging |
| Automation | Limited focus on automation | Extensive automation for builds, tests, and deployments |
| Documentation | Lightweight, as needed | Comprehensive, includes infrastructure as code |
| Cultural Philosophy | Agile mindset and values | DevOps culture of collaboration and shared responsibility |
| Implementation Scope | Primarily within development teams | Across development and operations teams |

Difference Between Agile and DevOps

Agile and DevOps share the objective of enhancing software delivery and quality, but they diverge in several aspects:

Scope and Emphasis
Agile: Centers on refining the software development process and project management. It stresses iterative development, customer engagement, and flexibility to accommodate changes.
DevOps: Goes beyond development to encompass IT operations, striving to enhance the entire software delivery cycle. DevOps methodologies prioritize collaboration between development and operations, automation, and continuous integration and delivery.

Team Setup
The Agile methodology involves cross-functional teams comprising developers, testers, and business analysts working closely together. While each team member may have distinct roles, they collaborate toward shared objectives.
In contrast, DevOps advocates integrated teams where development and operations professionals collaborate seamlessly throughout the software delivery lifecycle. This approach helps break down barriers between teams and encourages a culture of shared responsibility.

Automation Practices
Agile teams use tools to support development activities, but the emphasis on automation is not as pronounced as in DevOps. They may automate tasks like testing but primarily focus on iterative development and customer feedback. DevOps treats automation as a core tenet: by automating build processes, testing procedures, and deployment tasks, DevOps aims to enhance efficiency, minimize errors, and facilitate continuous delivery.

Feedback Channels
Agile relies on feedback from customers and stakeholders, gathered through sprint reviews and retrospectives, to drive enhancements. DevOps underscores the importance of feedback obtained from monitoring systems and logging mechanisms; DevOps teams leverage real-time data to swiftly identify and address issues, ensuring optimal software performance in production.

Cultural Philosophy
Agile philosophy: Centers on the core values and mindset of Agile, which prioritize collaboration with customers, adaptability, and continuous enhancement. It fosters a culture of flexibility and responsiveness to change.
DevOps culture: Focuses on nurturing an environment of shared responsibility and ongoing learning between development and operations teams. The goal of DevOps is to establish a setting where all team members collaborate toward common objectives.

Similarities Between Agile and DevOps

Despite their differences, Agile and DevOps exhibit resemblances that complement each other:
Emphasis on collaboration: Both Agile and DevOps stress the significance of collaboration among team members.
Agile encourages cross-functional teamwork, while DevOps supports merging development with operations to enhance communication and break down barriers.
Continuous enhancement: Both methodologies prioritize continuous improvement. Agile concentrates on delivering incremental changes based on customer feedback, while DevOps relies on continuous integration and delivery for rapid enhancements driven by real-time monitoring feedback.
Customer-focused approach: Both Agile and DevOps emphasize delivering value to customers. Agile methodologies prioritize working closely with customers and gathering feedback to ensure the final product meets user requirements, while DevOps practices focus on delivering high-quality software and consistently enhancing the overall customer experience.
Embracing change and adaptability: Both Agile and DevOps emphasize the importance of adaptability in the development process. Agile encourages teams to be responsive to evolving needs and adjust their strategies accordingly. Similarly, DevOps empowers teams to swiftly address issues and make necessary tweaks to enhance performance and reliability.

The Verdict?

In software development, both Agile and DevOps play huge roles, offering distinct advantages and catering to different aspects of the software delivery lifecycle. While Agile concentrates on refining development processes and project management through practices centered around customer needs, DevOps extends these principles by incorporating IT operations, stressing collaboration, automation, and continuous deployment.
When To Use Agile

Agile is ideal for projects where:
Requirements are expected to change frequently
Customer feedback is crucial to the development process
The project involves a high degree of complexity and uncertainty
Teams need a flexible, iterative approach to manage work

When To Use DevOps

DevOps is suitable for organizations that:
Require frequent, reliable software releases
Need to improve collaboration between development and operations teams
Aim to reduce time to market and enhance deployment frequency
Want to implement extensive automation in their build, test, and deployment processes

Combining Agile and DevOps

By merging Agile and DevOps, companies can gain the advantages of both: Agile principles guide project management and development practices, while DevOps practices handle deployment and operations. This combination gives teams an effective, high-quality software delivery process. It lets organizations adapt swiftly to changing needs, provide value to customers, and uphold performance levels in production environments.

Conclusion

Agile and DevOps are both methodologies that have transformed the software development field. Understanding their distinctions, their similarities, and how they work together is vital for organizations seeking to optimize their software delivery procedures. By capitalizing on the strengths of both Agile and DevOps, teams can foster a culture of teamwork, ongoing enhancement, and customer focus, ultimately delivering top-quality software that meets user expectations. Let me know in the comments which one you use in your company.
Innovation and increased productivity play crucial roles in software development. One way to achieve them is the Specification-First approach, which structures and manages the development process. This article explores the concept of Specification-First, its significance for development teams, and the advantages it brings to testing and integration. Specification-First is a software development methodology based on the principle that the product requirements specification should be developed and approved before the active coding phase begins. This establishes clear project goals and parameters from the outset, fostering a more structured and predictable development process. The methodology helps to avoid misunderstandings between clients and developers and minimizes the risk of requirement changes in later stages of development.

Who Is This Article For?

This article is intended for project managers and team leaders in software development. It provides insights into the Specification-First approach, which can enhance the efficiency of the development process and improve software quality. By understanding and implementing this approach, managers can improve team communication, reduce development time, increase client satisfaction, and facilitate parallel work among teams, ultimately leading to faster product development.

What Is Specification-First?

Specification-First is an approach to software development in which the specification of an API or service is created and approved before the actual development begins. This means that the development team first defines how the application interface should look, which endpoints (methods) should be available, what data should be transmitted, and in what manner.

Why Is the Specification-First Approach Important?
Proactive Development Process Management
Specification-First enables the team to clearly understand what they need to create even before they start coding. This reduces the likelihood of misunderstandings and discrepancies between customer expectations and the actual outcome.

Improved Communication
Creating an API specification encourages developers, clients, and other stakeholders to discuss and refine requirements. This leads to a better understanding of the project and accelerates the development process.

Easy Integration and Testing
One of the main advantages of Specification-First is the ability to start integration and testing even before the code is ready. With an API specification, mock services can be set up and automated tests can be created, speeding up the development process and ensuring higher code quality.

Benefits for Automated Quality Assurance

1. Earlier Test Development
Since the API specification is created before development begins, the AQA department can start writing tests in advance, based on the methods already described in the specification. This significantly reduces the time required to develop the test suite and increases its completeness and accuracy. For example, with a clear specification in hand, the AQA department can begin developing test scenarios during the planning stage, optimizing the testing process and saving time later.

2. Increased Efficiency
Testing against a predefined specification simplifies the process and enhances the AQA department's efficiency. With clear, concise requirements outlined in the specification, testing specialists can focus on verifying specific functional capabilities rather than spending time identifying discrepancies in the interface or ambiguities in the requirements.
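To make the "tests before the code exists" idea concrete, here is a hedged Python sketch. It checks a mocked response for a hypothetical /users endpoint against a response schema as it might appear in a specification. The schema, the mocked data, and the `conforms` helper are all invented for illustration; `conforms` handles only a tiny subset of JSON Schema, not the full standard.

```python
# Response schema fragment as it might appear in a specification (hypothetical).
USER_LIST_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "id": {"type": "integer"},
            "name": {"type": "string"},
        },
    },
}

# Map schema type names to Python types for the subset we support.
TYPE_MAP = {"array": list, "object": dict, "integer": int, "string": str}

def conforms(value, schema):
    """Recursively check `value` against a tiny subset of JSON Schema."""
    if not isinstance(value, TYPE_MAP[schema["type"]]):
        return False
    if schema["type"] == "array":
        return all(conforms(item, schema["items"]) for item in value)
    if schema["type"] == "object":
        return all(
            key in schema["properties"]
            and conforms(val, schema["properties"][key])
            for key, val in value.items()
        )
    return True

# A mocked response, written straight from the spec, long before the
# real endpoint is implemented.
mock_response = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]

print(conforms(mock_response, USER_LIST_SCHEMA))     # True
print(conforms([{"id": "oops"}], USER_LIST_SCHEMA))  # False
```

In practice a team would use a full validator library rather than hand-rolling one, but the workflow is the same: the AQA department can write and run such checks against mocks while the backend is still under development.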
For instance, having a detailed specification helps AQA engineers quickly determine which tests to run to verify specific functionality, significantly reducing the time spent on test scenario development and execution.

Integration Benefits

A specification is crucial for efficient integration with other teams for several important reasons:

Clarity and Alignment
A specification defines clear project goals and parameters from the outset. This ensures that all teams involved have a unified understanding of what needs to be developed and how different components will interact. A shared specification allows teams to align their efforts more effectively toward common objectives.

Minimizing Misunderstandings
Specifications help to avoid misunderstandings between teams, clients, and stakeholders. By documenting requirements comprehensively upfront, the risk of misinterpretation or miscommunication during the integration phase is significantly reduced. This leads to smoother collaboration and integration across teams.

Faster Issue Resolution
When teams work from a well-defined specification, any issues or questions that arise during integration can be addressed more quickly and decisively. The specification serves as a reference point to troubleshoot problems, identify root causes, and implement solutions efficiently.

Accelerated Development Process
With a specification in place, integration tasks can commence even before the entire system is fully developed. Teams can start integrating their components based on the agreed interfaces and behaviors specified in the document. This parallel work streamlines the development process and accelerates overall project timelines.

Enhanced Quality Assurance
Specifications facilitate easier and more comprehensive testing.
Test scenarios can be developed based on the expected behavior defined in the specification, allowing quality assurance teams to validate functionality early on. This leads to higher-quality software with fewer defects.

Improved Stakeholder Satisfaction
A specification-driven approach often results in outcomes that align closely with stakeholder expectations. By adhering to the documented requirements, development teams can deliver products that meet or exceed client needs, leading to higher satisfaction.

Specification

API specifications may be maintained using a range of OAS3 tools, particularly in the context of backend development. These platforms offer efficient ways to create, manage, and document API specifications, making them available to the entire development and Quality Assurance (QA) team.

OAS3

OAS3 refers to the OpenAPI Specification 3, a standard for describing web services in a machine-readable format. An OAS3 specification is presented in JSON or YAML format, detailing request and response structures, data schemas, parameters, paths, and other API specifics. Key features of OAS3 include:
API description: OAS3 allows you to describe your API's structure, including available endpoints (paths), supported methods (GET, POST, PUT, DELETE, etc.), request parameters, headers, and bodies.
Data schemas: OAS3 enables the definition of data schemas for API requests and responses, providing a clear specification of the data formats used in the API.
Validation and documentation: The OAS3 specification can be used to automatically validate requests and responses and to generate API documentation that is easily readable by humans and machines.
OAS3 is a powerful tool for standardizing API descriptions and simplifying web service development, testing, and integration.
Let's illustrate with an example:

```yaml
openapi: 3.0.0
info:
  title: Sample API
  version: 1.0.0
paths:
  /users:
    get:
      summary: Returns a list of users.
      responses:
        '200':
          description: A list of users.
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    id:
                      type: integer
                    name:
                      type: string
```

Apicurio

Apicurio is a tool for creating, modifying, and administering API specifications (definitions of software interfaces) through an intuitive, user-friendly interface. It empowers users to develop new specifications, modify existing ones, manage versions, generate documentation, and integrate with various development tools. Apicurio streamlines the lifecycle management of API specifications, enhancing their precision and accessibility for stakeholders. For instance:

```json
{
  "openapi": "3.0.0",
  "info": {
    "title": "Sample API",
    "version": "1.0.0"
  },
  "paths": {
    "/users": {
      "get": {
        "summary": "Returns a list of users.",
        "responses": {
          "200": {
            "description": "A list of users.",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "properties": {
                      "id": { "type": "integer" },
                      "name": { "type": "string" }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```

gRPC

Finally, we can use gRPC as a specification, describing methods, services, and objects. gRPC (gRPC Remote Procedure Call) is a framework for defining, designing, and deploying remote services that use Remote Procedure Call (RPC) protocols. gRPC uses a simple interface definition for services and structured data for their exchange, from which client and server code can be generated in various programming languages. A use case for gRPC in a backend team might look as follows: suppose you have a backend team developing microservices for your application.
You can use gRPC to define the interfaces of these microservices in the form of an RPC protocol that describes which methods are available, what parameters they take, and what data they return. The interface is defined using Protobuf (Protocol Buffers), the Interface Definition Language (IDL) that is part of the standard gRPC toolchain. After defining the interface, you can generate client- and server-side code in your team's programming language. This allows the team to quickly create clients and servers that can communicate with each other using the generated code. Thus, using gRPC as a specification for the backend team standardizes data exchange between microservices, simplifies development, and helps ensure high application performance. Example:

```protobuf
syntax = "proto3";

service UserService {
  rpc GetUser(UserRequest) returns (UserResponse);
}

message UserRequest {
  string user_id = 1;
}

message UserResponse {
  string name = 1;
  int32 age = 2;
}
```

Auto-Generation

To generate code from the specification, you can use various tools and libraries designed for this purpose. Here are several popular methods:

1. Using Code Generators
Many tools, such as OpenAPI Generator or gRPC Tools, provide code generators that automatically create client and server code from the API specification. Specify your specification in the appropriate format, select the programming language and the type of code you want to generate, and the tool does the rest.

2. Using IDE Plugins
Some integrated development environments (IDEs), such as IntelliJ IDEA, Visual Studio Code, or Eclipse, offer plugins that let you generate code from the API specification directly within the development environment, typically through the IDE context menu or special commands.

3. Using Scripts and Command-Line Utilities
You can use scripts and command-line utilities to configure and automate the code generation process more flexibly. The choice of method depends on your project's preferences and requirements: the type of API specification, the technologies used, and the tools your development team prefers.

Conclusion

Implementing the Specification-First principle is a crucial step toward improving the efficiency of software development processes. This approach fosters a more structured and transparent development process, enhances quality, and accelerates time to market. To transition to Specification-First successfully, consider the following steps:

1. Selecting the Right Tool
The choice of a tool for creating and storing API specifications plays a significant role: it affects the ease of working with APIs and the accessibility and clarity of specifications for the entire team.

2. Gradual Integration and Adaptation
It is best to introduce the new approach gradually, starting with individual projects or modules. This allows the team to become familiar with the new methodologies and tools, learn best practices, and optimize the process.

3. Consideration of Authentication and Security
API specifications may also include information about authentication methods, authorization, and other security aspects. This ensures the security of the developed applications from the outset and helps avoid issues in the future.

4. Team Training and Preparation
Transitioning to a new approach requires understanding and support from the entire team. Training team members on the fundamentals of Specification-First, its advantages, and implementation methodologies is the first step toward successful adoption. Once the team has adopted Specification-First in one project, it can expand the approach to all subsequent projects and teams.
Over time, Specification-First will become part of the corporate culture and a standard approach to software development within the organization. Transitioning to Specification-First optimizes processes within the team and contributes to achieving higher quality standards and customer satisfaction.
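To make the code-generation step described above concrete, here is a minimal sketch. It writes the UserService contract from the earlier example to disk and then only prints the typical generator invocations rather than running them, so it works without protoc or OpenAPI Generator installed; the directory layout, the Go plugin choice, and the api-spec.yaml file name are all illustrative assumptions, not part of the original article.

```shell
# Recreate the UserService contract from the example above.
mkdir -p proto gen
cat > proto/user_service.proto <<'EOF'
syntax = "proto3";

service UserService {
  rpc GetUser(UserRequest) returns (UserResponse);
}

message UserRequest {
  string user_id = 1;
}

message UserResponse {
  string name = 1;
  int32 age = 2;
}
EOF

# Typical generation commands (echoed only; run them for real once the
# tools are installed -- paths and target language are assumptions):
echo "protoc --proto_path=proto --go_out=gen --go-grpc_out=gen user_service.proto"
echo "openapi-generator-cli generate -i api-spec.yaml -g kotlin -o gen/client"
```

From here, the generated stubs become the shared contract: the backend team compiles the server skeleton, clients compile the client stubs, and the .proto file stays the single source of truth.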
A (long) time ago, my first job consisted of implementing workflows using the Staffware engine. In short, a workflow comprises tasks; an automated task delegates to code, while a manual task requires somebody to do something and mark it as done. Then, it proceeds to the next task — or tasks. Here's a sample workflow:

The above diagram uses the Business Process Model and Notation. You can now design your workflow using BPMN and run it with compatible workflow engines.

Time has passed. Staffware is now part of Tibco. I didn't use workflow engines in later jobs. Years ago, I started to automate my conference submission process. I documented it in parallel. Since then, I have changed the infrastructure on which I run the software. This post takes you through the journey of how I leveraged this change and updated the software accordingly, showcasing the evolution of my approach.

Generalities

I started on Heroku with the free plan, which no longer exists. I found the idea pretty brilliant at the time. The offering was based on dynos, something akin to containers. You could have a single one for free; when it was not used for some time, the platform switched it off and spun up a new one when receiving an HTTP request. I believe it was one of the earliest serverless offerings.

In addition, I developed a Spring Boot application with Kotlin based on the Camunda platform. Camunda is a workflow engine. One of the key advantages of workflow engines is their ability to store the state of a particular instance, providing a comprehensive view of the process. For example, in the above diagram, the first task, labeled "Request Purchase," would store the requester's identity and the references of the requested item (or service). The Purchase Department can examine the details of the requested item in the following task. The usual storage approach is to rely on a database.

The Initial Design

At the time, Heroku didn't provide free storage.
However, I had to design my initial workflow around this limitation, which posed its own set of challenges. I couldn't store anything permanently, so every run had to be self-contained. My fallback option was to run in memory with the help of H2. Here is my initial workflow in all its glory:

As a reminder, everything starts from Trello. When I move a card from one lane to another, Trello sends a request to a previously registered webhook. As you can expect, the hook is part of my app and starts the above workflow. The first task is the most important one: it evaluates the end state from the event payload of the webhook request. The assumption is that the start state is always Backlog. Because of the lack of storage, I designed the workflow to execute and finish in one run. The evaluation task stores the end state as a BPMN variable for later consumption. After the second task extracts the conference from the Trello webhook payload, the flow evaluates the variable: it forwards the flow to the state-related subprocess depending on its value.

Two things happened with time:

- Salesforce bought Heroku and canceled its free plan. At the same time, Scaleway offered its own free plan for startups. Their Serverless Containers are similar to Heroku's dynos: nodes start when the app receives a request. I decided to migrate from Heroku to Scaleway. You can read about my first evaluation of Scaleway.
- I migrated from H2 to the free Cockroach Cloud plan.

Refactoring to a New Design

With persistent storage, I could think about the problems of my existing workflow. First, the only transition available was from Backlog to another list, i.e., Abandoned, Refused, or Accepted. The thing is, I wanted to account for additional, less common transitions; for example, a talk could be accepted but later abandoned for different reasons. With the existing design, I would have to compute the transition, not only the target list. Next, I created tasks to extract data.
It was not only unnecessary; it was bad design. Finally, I used subprocesses for grouping. While not an issue per se, the semantics were wrong.

With persistent storage, we can pause a process instance after a task and resume the process later. For this, we rely on messages in BPMN parlance. A task can flow to a message event. When the task finishes, the process waits until it receives the message. When it does, the process resumes. If you can send different message types, an event-based gateway helps forward the flow to the correct next step. Yet, the devil lurks in the details: any instance can receive the message, but only one is relevant, the one associated with the Trello card. Camunda to the rescue: we can send a business key, i.e., the Trello card ID, along with the message. Note that if the engine finds no matching instance, it creates a new one. Messages can trigger start events as well as regular ones. Here's my workflow design:

For example, imagine a Trello hook that translates to an Abandoned message. If there's no instance associated with the card, the engine creates a new instance and sends the Abandoned message, which:

- Starts with the flow located at the lower left
- Ticks the due date on the Trello card
- Finishes the flow

If it finds an existing instance, it looks at its current state: it can be either Submitted or Accepted. Depending on the state, it continues the flow.

Conclusion

In this post, I explained how I first limited my usage of BPMN and then unlocked its true power when I benefited from persistent storage. However, I didn't move from one to the other in one step. My history involves more than twenty versions. While Camunda keeps older versions by design, I didn't bother with my code. When I move cards around, it will fail when handling cards that were already beyond Backlog. For regular projects, code needs to account for different versions of existing process instances.
I'm okay with some manual steps until every card previously created is done.

To Go Further

- Business Process Model and Notation
- Camunda
- My evaluation of the Scaleway Cloud provider
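As an aside, the business-key correlation described in this post maps naturally onto Camunda 7's REST API: you POST the message name together with the business key, and the engine correlates it to the matching instance (or starts a new one if a message start event matches). The sketch below only builds the JSON payload and prints the curl call; the host, port, and card ID are hypothetical, and the post's own application presumably uses the embedded engine API (e.g., RuntimeService.createMessageCorrelation) rather than REST.

```shell
# Hypothetical Trello card id used as the BPMN business key.
CARD_ID="trello-card-123"

# Payload for Camunda 7's message correlation endpoint.
cat > message.json <<EOF
{
  "messageName": "Abandoned",
  "businessKey": "$CARD_ID"
}
EOF

# Against a running engine this would correlate the message (echoed only):
echo "curl -X POST -H 'Content-Type: application/json' -d @message.json \
  http://localhost:8080/engine-rest/message"
```

Because the business key is the Trello card ID, each card maps to exactly one process instance, which is what makes the event-based gateway design work.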
Data science isn't just a trend; it's a transformative force revolutionizing industries and creating a wealth of career opportunities. This comprehensive overview delves into the world of data science careers, their future outlook, and the essential skills for success in this dynamic field.

Defining Data Science

Data science is the interdisciplinary practice of extracting knowledge and actionable insights from structured and unstructured data. It leverages a combination of tools, algorithms, machine learning, and statistical methods to analyze and interpret complex data sets. Data scientists uncover hidden patterns, correlations, and trends that empower informed decision-making, optimize processes, and drive innovation.

The Importance of Data Science

In our data-centric world, organizations across all sectors generate vast amounts of data. When analyzed effectively, this data can reveal invaluable insights that can reshape business operations, enhance customer experiences, and propel strategic growth. Data science plays a crucial role in enabling organizations to harness the power of their data for a competitive advantage.

Diverse Career Paths in Data Science

A diagram showcasing the various career paths in data science, including Data Scientist, Data Analyst, Data Engineer, Machine Learning Engineer, and Business Intelligence Analyst.

The field of data science offers a wide array of career options, each with distinct responsibilities and focus areas:

- Data Scientist: Data scientists are the architects of data-driven solutions. They design and implement sophisticated models, algorithms, and data pipelines to address complex business challenges. Skills: strong analytical skills, programming expertise (Python, R), deep understanding of machine learning.
- Data Analyst: Data analysts are the storytellers of data. They collect, process, and analyze data to reveal meaningful patterns and trends. They often utilize visualization tools to present their findings to stakeholders in a clear and actionable format. Skills: data collection, data processing, data analysis, visualization tools.
- Data Engineer: Data engineers are the builders of data infrastructure. They design, construct, and maintain the systems and pipelines responsible for collecting, storing, and processing massive volumes of data. Skills: big data technologies (Hadoop, Spark), cloud platforms (AWS, Azure), database management.
- Machine Learning Engineer: Machine learning engineers develop and deploy machine learning models that can learn from data and make predictions or decisions. Skills: machine learning algorithms, software engineering, model deployment.
- Business Intelligence Analyst: Business intelligence analysts leverage data to glean insights into business performance, customer behavior, and market trends. Skills: BI tools (Tableau, Power BI), data analysis, reporting.

Essential Skills for Data Science Careers

To thrive in data science, a blend of technical and soft skills is crucial:

Technical Skills

- Programming (Python, R)
- Data manipulation and analysis (SQL, Pandas)
- Machine learning algorithms (regression, classification, clustering)
- Data visualization (Tableau, Power BI)
- Big data technologies (Hadoop, Spark)

Soft Skills

- Critical thinking and problem-solving
- Effective communication and presentation skills
- Business acumen
- Collaborative teamwork
- Curiosity and a passion for learning

The Promising Future of Data Science

The future of data science is incredibly bright. As organizations continue to amass more data, the demand for skilled data professionals will surge. Emerging technologies like artificial intelligence (AI), the Internet of Things (IoT), and blockchain will further propel the growth of data science. Data scientists will be instrumental in developing AI-powered applications, analyzing IoT data to optimize processes, and ensuring the security and integrity of blockchain networks.
Embarking on a Data Science Career

If you're captivated by the possibilities of data science, here's a path to get started:

Data Science Education and Career Development

- Education: Consider pursuing a formal degree in data science, computer science, statistics, or a related field.
- Online Courses and Bootcamps: Explore online courses or bootcamps to gain practical experience with data science tools and techniques.
- Build a Portfolio: Undertake personal projects or contribute to open-source initiatives to demonstrate your skills.
- Network: Attend industry events and conferences to connect with professionals in the field.

In conclusion, data science is far more than a passing trend. It's a driving force behind innovation, offering a diverse array of career paths for those with the right skills and passion. As technology continues to advance and data becomes even more integral to decision-making, the demand for skilled data scientists will only intensify. Whether you're drawn to the analytical rigor of a data scientist, the storytelling prowess of a data analyst, or the infrastructure expertise of a data engineer, the world of data science is ripe with opportunities for those who are eager to learn, adapt, and make a meaningful impact on the future. With the right preparation and a dedication to continuous learning, a career in data science can be both intellectually rewarding and financially lucrative.
1. Use "&&" to Link Two or More Commands

Use "&&" to link two or more commands when you want the next command to run only after the previous one has succeeded. If you use ";" instead, the next command runs even if the command before the ";" failed, so you would have to wait and run each command one by one. Using "&&" ensures that the next command will only run if the preceding command finishes successfully. This allows you to queue up commands without waiting, move on to the next task, and check later: if the last command ran, all previous commands ran successfully.

Example:

Shell
ls /path/to/file.txt && cp /path/to/file.txt /backup/

The above example first ensures that the previous command runs successfully and that the file "file.txt" exists. If the file doesn't exist, the second command after "&&" won't run and won't attempt to copy it.

2. Use "grep" With -A and -B Options

One common use of the "grep" command is to identify specific errors in log files. Using it with the -A and -B options provides additional context within a single command: it displays lines after and before the matched text, which enhances visibility into related content.

Example:

Shell
% grep -A 2 "java.io.IOException" logfile.txt
java.io.IOException: Permission denied (open /path/to/file.txt)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:53)
    at com.pkg.TestClass.writeFile(TestClass.java:258)

Using grep with -A here also shows the 2 lines after the line where "java.io.IOException" was found in logfile.txt. Similarly,

Shell
grep "Ramesh" -B 3 rank-file.txt
Name: John Wright, Rank: 23
Name: David Ross, Rank: 45
Name: Peter Taylor, Rank: 68
Name: Ramesh Kumar, Rank: 36

Here, grep with the -B option also shows the 3 lines before the line where "Ramesh" was found in rank-file.txt.

3. Use ">" to Create an Empty File

Just write > followed by a filename to create an empty file with that name.

Example:

Shell
> my-file.txt

It creates an empty file named "my-file.txt" in the current directory.

4. Use "rsync" for Backups

"rsync" is a useful command for regular backups, as it saves time by transferring only the differences between the source and the destination. This is especially beneficial when creating backups over a network.

Example:

Shell
rsync -avz /path/to/source_directory/ user@remotehost:/path/to/destination_directory/

5. Use Tab Completion

Making tab completion a habit is faster than typing filenames in full. Typing the initial letters of a filename and letting Tab completion finish it streamlines the process and is more efficient.

6. Use "man" Pages

Instead of reaching for the web to find the usage of a command, a quicker way is to use the "man" command to read its manual. This approach not only saves time but also ensures accuracy, as command options can vary based on the installed version. By accessing the manual directly, you get precise details tailored to your version.

Example:

Shell
man ps

It displays the manual page for the "ps" command.

7. Create Scripts

For repetitive tasks, create small shell scripts that chain commands and perform actions based on conditions. This saves time and reduces risks in complex operations.

Conclusion

In conclusion, becoming familiar with these Linux commands and tips can significantly boost productivity and streamline workflow on the command line. By using techniques like command chaining, context-aware searching, efficient file management, and automation through scripts, users can save time, reduce errors, and optimize their Linux experience.
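The difference between ";" and "&&" from tip 1 is easy to verify for yourself. In this sketch (the file and marker names are made up for illustration), the ";" chain keeps going after a failure, while the "&&" chain short-circuits:

```shell
# ';' keeps going after a failure: the touch runs even though ls failed.
rm -f marker-semicolon marker-and
ls no-such-file.txt 2>/dev/null ; touch marker-semicolon

# '&&' short-circuits: the touch never runs because ls failed.
ls no-such-file.txt 2>/dev/null && touch marker-and

echo "semicolon marker exists: $(test -f marker-semicolon && echo yes || echo no)"
echo "&& marker exists: $(test -f marker-and && echo yes || echo no)"
```

After running this, marker-semicolon exists but marker-and does not, which is exactly why "&&" is the safer choice for chains like "build && deploy".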
Marty Cagan describes the job of the Product Manager as "to discover a product that is valuable, usable, and feasible." Finding the balance between the business, users, and technology demands a diverse skill set. There are many things going on simultaneously that require attention. In this regard, Jira is great. Sure, it has some downsides, but the tool can help Product Managers to:

- Keep the product strategy aligned.
- Clearly prioritize tasks while keeping them structured and organized.
- Analyze the performance of your team.

Use a Roadmap To Keep Your Strategy Aligned

As powerful as Jira is in the right hands, it is not a solution for everything. For instance, it is probably not the best tool for developing a product roadmap. However, it is quite good for managing one. What this means is that Jira has great functionality for managing roadmaps in a quick, actionable, and transparent way. Nevertheless, it requires proper input: you need to break down your scope into Epics and tasks before you start building a roadmap in Jira. We typically use a framework called BRIDGeS for multi-context analysis of a project. This framework leaves us with prioritized, ready-to-use Epics and tasks at the end of the session. Given this article is not about roadmaps per se, I would rather not go into too much detail. I will be focusing on Jira instead.

Setting Up a Timeline in Jira

Once you have your work broken down into Epics and tasks, creating a roadmap – or, as Jira calls it, a Timeline – is quite simple:

1. Navigate to your board.
2. Select the "Timeline" option from the menu on the right.
3. Click on "+ Create Epic" to add an Epic.
4. Add child issues by clicking on the "+" sign next to the Epic.
5. Click on the timeline to set the timeframe for the Epic.

Tips and Tricks for Using Jira's Timeline Feature

Unlike most Jira features, the Timeline is rather intuitive and friendly to new users. Still, there are certain easily missable tips and tricks that can make your job much simpler.
It's just that you need to know where to look.

Add Dependencies

You can add dependencies between Epics from the Timeline. Simply hover over the timeline bar of an Epic and you will see two dots: one at the top right corner and one at the bottom left corner. Click and drag them to link one Epic to another. This is useful for understanding the order of work or visualizing potential blockers.

Note: The color of the connective thread changes to red if the dates of the Epics overlap. This feature is quite handy for easily seeing if certain dependencies are becoming blockers. Still, I'd recommend using dependencies wisely; otherwise, the roadmap will become confusing because of the intertwined Epics.

Use Different Colors for Epics

You can right-click on the timeframe to easily change the color of an Epic or to remove its start and end dates. Color-coding your Epics is a useful element of visualization.

View Settings

You can adjust the settings of the timeline if you wish to filter out completed issues or expand/collapse all of the Epics at the same time. Another useful option in the view settings is the progress bar. Enable it to see a bar indicating the progress of each Epic.

Filter out Epics With a Certain Status

You can use the status category filter to hide the Epics and tasks that are marked as done from the timeline. This simple filter greatly improves the visibility of the roadmap when you need to review done/in progress/future scope.

Prioritize and Manage Tasks in the Backlog

Now that we have an actionable plan, let's take a look at how Jira can be used to execute it.

Setting Up a Backlog in a Kanban Project

In my experience, most Agile teams prefer to use a Scrum board, which has the backlog feature enabled by default. That being said, a Kanban board needs a little bit of tweaking if you want to have a separate backlog rather than storing all of your issues on the board.
The task of adding a backlog is slightly simpler for Team-Managed projects: simply select the Add View option from the left side panel and enable the backlog.

The process of adding the backlog in a Company-Managed project is a bit trickier:

1. Go to the three dots menu at the top right corner of your board.
2. Select Board settings.
3. Select the Columns option.
4. Drag the backlog status card from the board into the Kanban backlog column.
5. Delete the original Backlog column by clicking on the trash bin icon.

Going back to the board, you'll see that it has only three columns left, and the backlog has been moved to the side panel.

Hint: This approach has an added benefit. Creating issues from the Backlog screen is much simpler and faster than from the board. Just click on the + Create Issue button and type in the name of your task. You can keep typing and hitting Enter to add new issues, and you can change their type as well.

Setting Up a Backlog (Or Several) in a Scrum Project

As I mentioned earlier, a Scrum project comes with the backlog feature enabled by default. That said, there is a major difference between the backlogs in Scrum and Kanban Jira projects: a Scrum project has two backlogs by default. One is the Product Backlog and the other is the Sprint Backlog.

The Sprint Backlog consists of the set of user stories or tasks that the development team commits to completing within a specific sprint or time-boxed iteration. It is a subset of the Product Backlog and represents the work selected during sprint planning for that particular sprint.

The Product Backlog contains a prioritized list of all the desired features, enhancements, and bug fixes for the product. It represents the complete scope of work that needs to be done over multiple sprints.

Hint: The backlog view in Jira allows you to create several Sprints. These Sprints can be used as separate backlogs for certain specific tasks.
For example, you can use these Sprints as separate backlogs for Bugs, Support Requests, the Icebox, etc. This functionality is super handy for keeping your work well-organized. The tasks from these backlogs can be pulled into the Sprint Backlog during the Sprint Planning session.

Story Points

As a feature, Story Points are used to estimate the complexity of a user story. Typically, we use the following approach when it comes to assigning points to user stories:

- 1 point: One-liner change. You know what should be changed. Very easy to test.
- 2 points: You are aware of what to do. Changes are bigger than a one-liner; ~1-2 days to implement. May include regression testing.
- 3 points: Bigger scope. May require some research, documentation reading, or codebase exploration. Includes unknown parts.
- 5 points: Biggest story. Not big enough to split.
- 8 points: Must be split. Do research first.

Bonus Tip: Backlog Refinement

Backlog refinement is the process of reviewing, prioritizing, and tidying up the backlog. It is a necessary activity because, over time, people will add a lot of tasks that are missing context. For now, let's focus on the benefits of tidying up your tasks:

- The team is working on the tasks that add real value to the product.
- The tasks are optimized and broken down in a way that a single issue doesn't take longer than an entire Sprint.
- The work that is in progress reflects the roadmap.

How do we do it? We typically refine the backlog once every two weeks:

- We take the stories from the Product Backlog and place them into relevant Sprint containers like Bugs, Technical Debt, or the upcoming Sprint.
- We review the estimation and priority of the tasks that are being moved from the Product Backlog.

Analyze the Performance of Your Team With the Built-In Reports

Jira has a variety of reporting tools available to Product Managers. They are easily accessible from the reports tab on the right-side menu.
Note: The Reports tab may not be enabled for you by default. Follow these steps if you do not see it:

1. Select the Add View option.
2. Select the More Features option.
3. Find the Reports option and toggle it on.

These reports can be used to analyze the performance of your team. They are also easily shareable and exportable. There is a wide selection of reports, but using all of them isn't necessary. Here is a brief overview of the ones we find most useful:

- Burndown chart: Tracks the remaining story points in Jira and predicts the likelihood of completing the Sprint goal.
- Burnup chart: Tracks project progress over time and compares the work that was planned to the work that has been completed to date.
- Sprint report: Analyzes the work done during a Sprint. It is used to point out either overcommitment or scope creep in a Jira project.
- Velocity chart: A bird's-eye view report that shows historical data of work completed from Sprint to Sprint. This chart is a nice tool for predicting how much work your team can reliably deliver based on previously burned Jira story points.

Conclusion

There are many new, slick work management tools on the market. Most are probably better than Jira in terms of UI and UX. That being said, as one of the oldest solutions out there, Jira has had the time and resources to develop a wide selection of features. This is why many PMs feel lost and confused when experiencing Jira for the first time. Don't worry, though; we've all been there. That's why this little guide exists: to show you the options and tools that will work best for you. Consider this your starting point in the endless sea of Jira features.