1. Use "&&" to Link Two or More Commands

Use "&&" to link two or more commands when you want the previous command to succeed before the next one runs. If you use ";" instead, the next command runs even if the command before the ";" failed, so you would have to wait and run each command one by one. Using "&&" ensures that the next command only runs if the preceding command finishes successfully. This lets you queue up commands without waiting, move on to the next task, and check back later: if the last command ran, all previous commands ran successfully.

Example:

Shell
ls /path/to/file.txt && cp /path/to/file.txt /backup/

The above example first checks that the file "file.txt" exists. If it doesn't, the command after "&&" won't run and won't attempt to copy it.

2. Use "grep" With -A and -B Options

A common use of the "grep" command is to find specific errors in log files. Using it with the -A and -B options provides additional context in a single command: it displays lines after and before the matched text, which improves visibility into related content.

Example:

Shell
% grep -A 2 "java.io.IOException" logfile.txt
java.io.IOException: Permission denied (open /path/to/file.txt)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:53)
    at com.pkg.TestClass.writeFile(TestClass.java:258)

Here, grep with -A 2 also shows the two lines after the line where "java.io.IOException" was found in logfile.txt.

Similarly:

Shell
grep "Ramesh" -B 3 rank-file.txt
Name: John Wright, Rank: 23
Name: David Ross, Rank: 45
Name: Peter Taylor, Rank: 68
Name: Ramesh Kumar, Rank: 36

Here, grep with the -B 3 option also shows the three lines before the line where "Ramesh" was found in rank-file.txt.

3. Use ">" to Create an Empty File

Write ">" followed by a filename to create an empty file with that name.

Example:

Shell
>my-file.txt

This creates an empty file named "my-file.txt" in the current directory.

4. Use "rsync" for Backups

"rsync" is a useful command for regular backups because it saves time by transferring only the differences between the source and the destination. This is especially beneficial when creating backups over a network.

Example:

Shell
rsync -avz /path/to/source_directory/ user@remotehost:/path/to/destination_directory/

5. Use Tab Completion

Making tab completion a habit is faster than typing out filenames manually. Typing the first letters of a filename and letting Tab complete the rest streamlines the process.

6. Use "man" Pages

Instead of reaching for the web to look up a command's usage, a quicker way is to use the "man" command to open its manual. This not only saves time but also ensures accuracy, since command options can vary between installed versions. By accessing the manual directly, you get details that match your version.

Example:

Shell
man ps

This opens the manual page for the "ps" command.

7. Create Scripts

For repetitive tasks, create small shell scripts that chain commands and perform actions based on conditions. This saves time and reduces risk in complex operations (a minimal example follows the conclusion below).

Conclusion

In conclusion, becoming familiar with these Linux commands and tips can significantly boost productivity and streamline workflow on the command line.
By using techniques like command chaining, context-aware searching, efficient file management, and automation through scripts, users can save time, reduce errors, and optimize their Linux experience.
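To tie several of these tips together, here is a minimal sketch of tip 7: a small backup script that combines "&&" chaining (tip 1) with "rsync" (tip 4). The paths, remote host, and log file below are placeholders for illustration, not values from the article.

Shell
#!/usr/bin/env bash
# Hypothetical nightly backup script; paths and host are placeholders.
SRC="/path/to/source_directory/"
DEST="user@remotehost:/path/to/destination_directory/"
LOG="$HOME/backup.log"

# "&&" ensures rsync only runs if the source directory exists,
# and the log entry is only written if the sync itself succeeded.
[ -d "$SRC" ] && rsync -avz "$SRC" "$DEST" && echo "$(date): backup OK" >> "$LOG"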
APIs and SDKs are the bridge to an underlying platform, allowing firms to build applications and integrate your platform into their business processes. Building APIs and SDKs that developers love to use is the key to a successful platform strategy, be it for internal or external teams. In the following article, I will share some of the most effective practices I have seen in the industry. Four strategies should be at the heart of any API/SDK program: simplicity, resilience, community building, and continuous improvement.

Prioritize Simplicity

Simplicity is the most essential factor to consider while designing APIs and SDKs. Firms are more likely to adopt and stay with you if your API and SDK are intuitive, well-documented, and easy to plug into other projects. Do not over-engineer or overcomplicate APIs/SDKs. Prefer clarity, consistency, and compliance with industry standards when drafting intuitive and user-friendly APIs and SDKs. Create endpoints with concise, descriptive names that accurately convey their purpose. Codify in-house standards across your entire API or SDK, with appropriate naming conventions and design patterns. Align with widely adopted standards and paradigms, such as RESTful principles, appropriate HTTP methods, language-specific conventions, and secure authentication mechanisms, to provide a seamless and familiar experience for developers. Here are a few good examples to consider:

- API Design Guidelines
- APIs Design
- API Standards Style Guide

Blindly following a style guide without considering the unique requirements and goals of your platform can lead to suboptimal outcomes. It is important to strike a balance between catering to developers' needs and doing what's right for the long-term success and viability of your platform. While it might be tempting to fulfill every feature request from your users, you must make hard choices to prioritize the health and maintainability of your platform (the adage applies: "Put on your own oxygen mask first before assisting others"). Nothing erodes trust like a platform that lacks stability and security or cannot scale, so work hard to find the right balance: a developer-friendly experience that does not put those criteria in peril.

Designing for Resilience

While designing APIs and SDKs, it is essential to place error handling at their core. To provide a dependable developer experience, a platform needs a comprehensive and well-documented error code system that covers a significant range of possible failure scenarios, with dozens of unique error codes covering the various categories of errors: authentication failures, validation errors, resources not found, rate limiting, and other server-side errors. Furthermore, error messages should not only inform the developer about the nature of an error but also offer guidance on how to resolve it. Offer retry mechanisms to developers when dealing with partial failures. Provide them with the means to configure the retry behavior, such as the maximum number of retries and the initial retry delay. Additionally, set timeout values to prevent requests to services from hanging or being blocked indefinitely. Allow developers to customize the timeout setting and provide them with a way to gracefully cancel a long-running request. Follow an all-or-nothing approach when it comes to transactional operations.
Keep data integrity and consistency at the forefront whenever a batch operation is invoked: either all operations in the batch should succeed or none of them should. The developer should be notified about which items in the batch were successful and which were erroneous. Ensure that your APIs and SDKs include robust logging capabilities that can help developers troubleshoot and debug issues. Log relevant information such as request/response details, error messages, and stack traces. Allow developers to configure logging verbosity and opt in or out of logging entirely in production. Define a consistent and clear versioning policy for your APIs and SDKs, and follow semantic versioning.

Fostering a Developer Community

Building a strong developer community around your APIs and SDKs is critical to drive adoption, educate developers, and promote innovation. Provide comprehensive documentation for your APIs and SDKs that thoroughly covers all they have to offer. Include getting-started guides, tutorials, code samples, reference documentation, and more. Build an interactive developer portal that serves as the central hub for all developer-related content. Include features such as API consoles, sandbox environments, and interactive documentation that allow developers to experiment and try out their integrations in a controlled setting. Engage with developers through popular developer platforms, social media, webinars, and in-person workshops. Participate in discussions, answer questions, and provide support for developers who are using your APIs and SDKs. Create an environment where developers can easily provide feedback and help test and improve your offerings. Set up bug trackers, feature requests, and general feedback submission processes. Foster community-driven support by encouraging developers to help each other in forums, establish a community-driven knowledge base, and provide moderation to ensure a positive and inclusive community. Make sure your support team is responsive and knowledgeable, replies to developer questions promptly, and provides value-added responses. Keep a detailed internal knowledge base or a dedicated FAQ section containing solutions to common questions and challenges. This ensures your support and field teams can quickly understand and resolve customer issues, delivering a seamless experience to the developers using your APIs and SDKs. Organize developer events and conferences to gather developers and encourage one-on-one communication. Invite veterans and industry experts to educate and enlighten, and enable developers to present their own projects so they can learn from one another. Gather feedback, announce features or changes, and bond with your developer community. Growing a thriving developer community gives you a supportive environment that cultivates collaboration, education, and innovation, driving your APIs and SDKs to become more popular and successful.

Iterate and Improve

Develop a structured approach for assessing and ordering feedback by its impact, urgency, and relationship to your company's objectives. Regularly consult your development team and stakeholders to review the feedback and determine which changes and features should be added to your roadmap. Devote resources and set deadlines to implement the modifications. Ensure your development cycle includes complete testing and quality assurance procedures to uphold the integrity and dependability of your APIs and SDKs.
Update your documentation and announce the changes to your developer community. Establish key performance indicators – API adoption rates, developer satisfaction, and support ticket response time, for example – to evaluate the performance of your changes. Regularly monitor and assess this data to evaluate the effect of your changes and identify potential improvements.

Lastly, build a culture of continuous learning and improvement within your organization. Ensure that your team keeps up with the latest trends in the industry, attends conferences and workshops, and participates in developer communities. Knowledge of current trends equips you with relevant insights to stay ahead by addressing developers' current needs. More importantly, have processes that enable you to iterate on and enhance the APIs and SDKs you provide. A process that lets you iterate effectively shows developers that you are serious about delivering quality products; without it, they can quickly switch to another provider when their expectations are not met. This way, you build trust and lasting relationships, and your platform becomes a reliable and innovative tool that keeps attracting developers in the market.

In conclusion, designing developer-friendly APIs and SDKs is a vital element of platform strategy. Prioritize simplicity, resilience, community, and continuous improvement. Remember, developers will only love your platform if they enjoy using it, so invest in making their experience better, meet their evolving needs, and enhance their satisfaction. These actions enable you to get the best out of your platform, introduce more innovations, and thrive in the dynamic technology landscape.
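As a rough illustration of the retry and timeout guidance above, here is a minimal sketch in shell, calling a hypothetical endpoint with curl. The URL, retry count, delay, and timeout values are illustrative assumptions, not settings prescribed by any particular platform.

Shell
#!/usr/bin/env bash
URL="https://api.example.com/v1/orders"   # hypothetical endpoint
MAX_RETRIES=3    # maximum number of retries the caller can configure
DELAY=2          # initial retry delay in seconds (doubled on each attempt)
TIMEOUT=10       # per-request timeout so calls never hang indefinitely

for attempt in $(seq 1 "$MAX_RETRIES"); do
  # --max-time enforces the timeout; -f makes curl fail on HTTP error codes.
  if curl -sf --max-time "$TIMEOUT" "$URL" -o response.json; then
    echo "Success on attempt $attempt"
    break
  fi
  echo "Attempt $attempt failed; retrying in ${DELAY}s..." >&2
  sleep "$DELAY"
  DELAY=$((DELAY * 2))
done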
Hello! My name is Roman Burdiuzha. I am a Cloud Architect, Co-Founder, and CTO at Gart Solutions. I have been working in the IT industry for 15 years, a significant part of which has been in management positions. Today I will tell you how I find specialists for my DevSecOps and AppSec teams, what I pay attention to, and how I communicate with job seekers who try to embellish their own achievements during interviews.

Starting Point

I may surprise some of you, but first of all, I look for employees not on job boards but in communities, in general chats for IT specialists, and through acquaintances. This way you can find a person who already comes with recommendations and make a basic assessment of how suitable he is for you, not by his resume but by his real reputation. You may even already know him because you move in the same circles.

Building the Ideal DevSecOps and AppSec Team: My Hiring Criteria

There are general chats for IT specialists in my city (and beyond) where you can simply write: "Guys, hello, I'm doing this and I'm looking for cool specialists to work with me." Then I send the requirements that are currently relevant to me. If none of this is possible, I use the classic options with job boards. Before inviting someone for an interview, I pay attention to the following points from the resume and recommendations.

Programming Experience

I am sure that any security professional in DevSecOps and AppSec must know code. Ideally, all security professionals should grow out of programmers. You may disagree with me, but DevSecOps and AppSec specialists work with code to one degree or another, be it YAML manifests, JSON, various scripts, or a classic application written in Java, Go, and so on. It is very wrong when a security professional does not know the language in which he is looking for vulnerabilities. You can't look at one line that the scanner highlighted and say: "Yes, indeed, this line is exploitable in this case," or "It's a false positive." You need to know the whole project and its structure. If you are not a programmer, you simply will not understand the code.

Taking Initiative

I want my future employees to be proactive — I mean people who work hard, take on big tasks, have ambitions, want to achieve, and spend a lot of time on specific tasks. I support people's desire to develop in their field, to advance in the community, and to look for interesting tasks and projects for themselves, including outside of work. If the resume reflects this, I definitely count it as a plus.

Work-Life Balance

I also pay a lot of attention to this point, and I always talk about it during the interview. Hobbies and interests indicate a person's ability to switch from work to something else, their versatility, and not being fixated on one job. It doesn't have to be active sports, hiking, walking, etc. The main thing is that a person's life contains not only work but also life itself. This means that he will not burn out after a couple of years of non-stop work. The ability to rest and be distracted acts as a guarantee of a long-term employment relationship. In my experience, there have only been a couple of cases when employees had only work in their lives and nothing more, but I consider them unique people. They have been working in this rhythm for a long time, do not burn out, and do not fall into depression. You need a certain stamina and character for this.
But in 99% of cases, overwork and the inability to rest mean guaranteed burnout and departure of the employee within 2-3 years. He can do a lot right now, but I don't want to have to replace people every couple of years.

Education

I completed postgraduate studies myself, and I think this is more of a plus than a minus. You should check the certificates and diplomas listed in the resume: confirmation of qualifications through certificates can indicate the veracity of the declared competencies. It is not easy to study for five years, but when you study, you are forced to think in the right direction, analyze complex situations, and develop something that has scientific novelty and can be used in the future for people's benefit. And here, in principle, it is the same: you combine common ideas with colleagues and create, for example, progressive DevOps that allows you to help people further, in particular in the security of the banking sector.

References and Recommendations

I ask the applicant to provide contacts of previous employers or colleagues who can give recommendations on his work. If a person has worked in the field of information security, there are usually mutual acquaintances with whom I also communicate and who can confirm his qualifications.

What I Look for in an Interview

Unfortunately, not all aspects can be clarified at the stage of reading the resume. The applicant may hide some things in order to present themselves in a more favorable light, but more often it is simply impossible to cover all the points an employer needs when compiling a resume. Through leading questions in a conversation with the applicant and his stories from previous jobs, I find out whether the potential employee has the qualities listed below.

Ability To Read

It sounds funny, but in fact, it is not such a common quality. A person who can read and analyze can solve almost any problem. I am absolutely convinced of this because I have gone through it myself more than once. These days I look for information from many sources and actively use ChatGPT and similar services just to speed up the work. The more information I push through myself, the more tasks I solve and, accordingly, the more successful I am. Sometimes I ask the candidate to find a solution to a complex problem online and provide him with material for analysis, and I look at how quickly he can read it and conduct a qualitative analysis of the provided article.

Analytical Mind

There are two processes: decomposition and composition. Programmers usually use the second. They conduct compositional analysis; that is, they assemble from the code some artifact that is needed for further work. An information security analyst or security specialist uses decomposition: on the contrary, he disassembles the artifact into its components and looks for vulnerabilities. If a programmer creates, then a security specialist takes apart. An analytical mind is needed for the part that is related to how someone else's code works. In the 90s, for example, we talked about disassembling if the code was written in assembler. You have a binary file, and you need to understand how it works. And if you do not analyze all entry and exit points, all processes, and all functions that the programmer has developed in this code, then you cannot be sure that the program works as intended.
There can be many pitfalls and logic issues related to the correct or incorrect operation of the program. For example, take a function that can be passed a certain amount of data. The programmer may assume the input is numeric, or that the data is limited to a certain format or length. Say we enter a card number. It seems like a card number has a fixed length, but any analyst (and you) should understand that instead of digits there can be letters or special characters, and the length may not be what the programmer expected. This also needs to be checked, and all hypotheses need to be analyzed, looking at everything much more broadly than the business logic and the thinking of the programmer who wrote it.

How do you tell whether a candidate has an analytical mind? All of this is easily clarified at the stage of "talking" with the candidate. You can simply ask questions like: "There is a data sample for process X, which consists of 1000 parameters. You need to determine the 30 most important ones. The analysis task will be solved by 3 groups of analysts. How will you divide these parameters to obtain high efficiency and reliability of the analysis?"

Experience Working in a Critical Situation

It is desirable that the applicant has experience working in a crunch; for example, if he worked with servers under a large, critical load and was on duty. Usually, these are night shifts, evening shifts, or weekends, when you have to urgently bring something back up and restore it. Such people are very valuable. They really know how to work and have personally gone through various "pains." They are ready to put out fires with you and, most importantly, are highly likely to be more careful than others. I worked for a company that had a lot of students without experience. They very often broke things, and afterward everything had to be brought back up. This is, of course, partly a consequence of mentoring: you have to help, develop, and turn students into specialists, but this does not negate the "pain" of correcting mistakes. And until you go through all this with them, they do not become cool. If a person participated in these processes and had the strength and ability to bring systems back up and fix them, this is very cool. You need to select and keep such people because they clearly know how to work.

How To Avoid Being Fooled by Job Seekers

Job seekers may overstate their achievements, but this is fairly easy to verify. If a person claims the necessary experience, you need to ask practical questions that are difficult to answer without real experience. For example, I ask about the implementation of a particular DevSecOps practice, such as which orchestrator he worked in. In a few words, the applicant should describe, for example, a job in which it was all performed and the tool he used. You can even mention some command-line options of a particular vulnerability scanner and ask which options, and in what context, he would use to make everything work. Only a specialist who has worked with this can answer these questions. In my opinion, this is the best way to check a person: give small practical tasks that can be solved quickly. It happens that an applicant has not worked with the same things as I have, and he may have more experience and knowledge. Then it makes sense to find common questions and points of contact, things we have both worked with.
For example, just list 20 things from the field of information security, ask which of them the applicant is familiar with, find common points of interest, and then go through them in detail. When an applicant brags about his own accomplishments in an interview, it is also better to ask specific questions. If a person describes without hesitation what he has implemented, you can additionally ask him about small details of each item and direction. For example: how did you implement SAST verification, and with what tools? If he answers in detail, possibly with additional nuances related to the settings of a particular scanner, and it fits into the general picture, then the person has lived this and actually used what he is talking about.

Wrapping Up

These are all the points that I pay attention to when looking for new people. I hope this information will be useful both for my Team Lead colleagues and for job seekers, who will now know what qualities they need to develop to successfully pass the interview.
Go through your code and follow the business logic. Whenever a question or doubt arises, there is potential for improvement.

Your Code May Come Back to You for Various Reasons

- The infrastructure, environment, or dependencies have evolved
- You want to reuse your code or logic in another context
- You need to introduce someone else or present your work before a wider audience
- The business requirements have changed
- Some improvements are needed
- There is a functional bug; etc.

There are two, equally valid approaches here — either you fix the issue(s) with minimal effort and move on to the next task, or you take the chance to revisit what you have done, evaluate and possibly improve it, or even decide it is no longer needed, based on the experience and knowledge you have gained in the meantime. The big difference is that when you revisit your code, you improve your skills as a side effect of doing your daily job. You may consider this a small investment that will pay for itself by increasing your efficiency in the future.

A Few Examples

Why did I do all this, and where can I find the requirements? Developers often context switch between unrelated tasks — you can save time for onboarding yourself and others by maintaining better comments/documentation. A reference to a ticket could do the job, especially if there are multiple tickets. If possible, keep the requirements together with your code; otherwise, try to summarize them.

Hmm, this part is inefficient! In many cases this happens due to chasing deadlines, blindly copying code around, or not considering the real amount of data during development. You may find yourself retrieving the same data many times too. Writing efficient code always pays off by saving on iterations to improve performance. When you revisit your code, you may find that there are new and better ways to achieve the same goal.

Oh, this is brittle — my assumptions may not hold in the future! "This will never happen" — you have heard it so many times at all levels of competence. No comment is needed here — a good reason why you should avoid writing brittle code is that you may want to reuse it in a different context. It's really hard to make no assumptions, but when you revisit your code, you should do your best to make as few assumptions as possible. Also consider that your code may run in different environments, where defaults and conventions may differ — never rely on things like date and number formats, order or completeness of data, availability of configuration or external services, etc.

Oops, it is incomplete — it only covers a subset of the business requirements! You have no one to blame — this is your own code. Don't leave it incomplete, because it will come back to you, and that always happens at the worst time possible.

I'm lost following my own logic... You definitely hit technical debt — and technical debt is immortal. As you develop professionally, you start doing things in more standard and widely recognized ways, so they are easier to maintain. It is quite tempting not to touch something that works. However, remember that, even if it works, it is only usable in the present context. Unreadable code is not reusable, not to mention it is hard to maintain. Fighting technical debt pays off by saving time and effort and by allowing you to reuse code and logic.

Uh, it's so big, it will take too much time to improve and I don't have enough time right now! Yet another type of technical debt.
In a large and complex piece of code, some parts may appear unreachable in the actual context, making the code even less readable. This could be a problem, but nobody complained so far, so let's wait... Don't trust this line of thinking. The complaints will always come at the worst times.

Summary

Even when it isn't recognized by management or your peers, the effort of revisiting your own code makes you a better professional, which in turn gives you a better position on the market. Additionally, keeping your code clean and high-quality is satisfying, without the need for someone else's assessment — and being satisfied with your work is a good motivation to keep going. For myself, I would summarize all of the above in a single phrase — don't copy code but revisit it, especially if it's your own. It's like re-entering your new password when you change it — it can help you memorize it better, even if it's easier to copy and paste the same string twice. Nothing stops you from doing all this when developing new code too.
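The earlier point about environment defaults and date formats is easy to demonstrate. Here is a small shell sketch of my own (not from the article) showing how output that relies on the locale differs between environments, while an explicit format stays stable; it assumes the de_DE.UTF-8 locale is available on the machine.

Shell
# Locale-dependent output: differs from one environment to another.
date                        # e.g., "Mon Jun 10 14:05:02 UTC 2024"
LC_TIME=de_DE.UTF-8 date    # e.g., "Mo 10. Jun 14:05:02 UTC 2024" (if the locale is installed)

# Explicit, environment-independent format: safe to log, parse, and compare.
date -u +"%Y-%m-%dT%H:%M:%SZ"    # e.g., "2024-06-10T14:05:02Z"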
Executive engineers are crucial in directing a technology-driven organization's strategic direction and technological innovation. As a staff engineer, it is essential to understand the significance of executive engineering: it goes beyond recognizing the hierarchy within an engineering department to appreciating the profound impact these roles have on individual contributors' day-to-day technical work and long-term career development.

Staff engineers are deep technical experts who focus on solving complex technical challenges and defining architectural pathways for projects. However, their success is closely linked to the broader engineering strategy set by the executive team. This strategy determines staff engineers' priorities, technologies, and methodologies. Therefore, aligning executive decisions and technical implementation is essential for the engineering team to function effectively and efficiently.

Executive engineers, such as Chief Technology Officers (CTOs) and Vice Presidents (VPs) of Engineering, extend beyond mere technical oversight; they embody the bridge between cutting-edge engineering practices and business outcomes. They are tasked with anticipating technological trends and aligning them with the business's needs and market demands. In doing so, they ensure that the engineering teams are not just functional but are proactive agents of innovation and growth.

For staff engineers, the strategies and decisions made at the executive level deeply influence their work environment, the tools they use, the scope of their projects, and their approach to innovation. Thus, understanding and engaging with executive engineering is essential for staff engineers who aspire to contribute significantly to their organizations and potentially advance into leadership roles. In this dynamic, the relationship between staff and executive engineers becomes a critical axis around which much of the company's success revolves. This introduction aims to explore why executive engineering is vital from the staff engineer's perspective and how it shapes an organization's technological and operational landscape.

Hierarchical Structure of Engineering Roles

In the hierarchical structure of engineering roles, understanding each position's unique responsibilities and contributions — staff engineer, engineering manager, and engineering executive — is crucial for effective career progression and organizational success.

- Staff Engineers are primarily responsible for high-level technical problem-solving and creating architectural blueprints. They guide projects technically but usually only indirectly manage people.
- Engineering Managers oversee teams, focusing on managing personnel and ensuring that projects align with organizational goals. They act as the bridge between the technical team and the broader business objectives.
- Engineering Executives, such as CTOs or VPs of Engineering, shape the strategic vision of the technology department and ensure its alignment with the company's overarching goals. They are responsible for high-level decisions about the direction of technology and infrastructure, often dealing with cross-departmental coordination and external business concerns.

The connection between a staff engineer and an engineering executive is pivotal in crafting and executing an effective strategy. While executives set the strategic direction, staff engineers are instrumental in grounding this strategy with their deep technical expertise and practical insights.
This collaboration ensures that strategic initiatives are both visionary and technically feasible, enabling the organization to innovate while maintaining robust operational standards.

The Engineering Executive's Primer: Impactful Technical Leadership

Will Larson's book, The Engineering Executive's Primer: Impactful Technical Leadership, is an essential guide for those aspiring to or currently in engineering leadership roles. With his extensive experience as a CTO, Larson offers a roadmap from securing an executive position to mastering the complexities of technical and strategic leadership in engineering.

Key Insights From the Book

Transitioning to Leadership: Larson discusses the nuances of obtaining an engineering executive role, from negotiation to the critical first steps post-hire. This guidance is vital for engineers transitioning from technical to executive positions, helping them avoid common pitfalls.

Strategic Planning and Communication: The book outlines how to run engineering planning processes effectively and maintain clear organizational communication. These skills are essential for aligning various engineering activities with company goals and facilitating inter-departmental collaboration.

Operational Excellence: Larson delves into managing crucial meetings, performance management systems, and the strategic hiring and onboarding of new engineers. These processes are fundamental to maintaining a productive engineering team and fostering a high-performance culture.

Personal Management: Another focus of the book, often overlooked in technical fields, is the importance of managing one's own priorities and energy. Larson provides strategies for staying effective and resilient in the face of challenges.

Navigational Tools for Executive Challenges: From mergers and acquisitions to interacting with CEOs and peer executives, the book provides insights into the broader corporate interactions an engineering executive will navigate.

Conclusion

The engineering executive's role is pivotal in setting a vision that integrates with the organization's strategic objectives, but it is the symbiotic relationship with staff engineers that brings this vision to fruition. Larson's The Engineering Executive's Primer is an invaluable resource for engineers at all levels, especially those aiming to bridge the gap between deep technical expertise and impactful leadership. Through this primer, engineering leaders can learn to manage, inspire, and drive technological innovation within their companies.
DZone is proud to announce our media partnership with PlatformCon 2024, one of the world's largest platform engineering events. PlatformCon runs from June 10-14, 2024, and is primarily a virtual event, but there will also be a large live event in London, as well as some satellite events in other major cities. This event brings together a vibrant community of the most influential practitioners in the platform engineering and DevOps space to discuss methodologies, recommendations, challenges, and everything in between to help you build the perfect platform. Need help convincing your manager (or yourself) that this is an indispensable conference to attend? You've come to the right place! Below are three key reasons why you should attend PlatformCon 2024.

1. Platform Engineering Is a Hot Topic in 2024

So, what is platform engineering? In his most recent article on DZone, Mirco Hering describes a platform engineer as someone who plays three roles: the technical architect, the community enabler, and the product manager. This multifaceted approach helps streamline development practices, takes the load off of software engineers, and allows each team to be more in sync with their deployment cycles. In 2024, we've seen an increase in articles and conversations on DZone around platform engineering, how it relates to DevOps, and the top considerations when looking to better optimize your development processes. Developers want to know more about this, and this conference is a perfect place to learn from the experts and connect with other like-minded individuals in the space.

2. Learn From Platform Engineering and DevOps Experts

Have you seen the lineup of speakers for PlatformCon this year?! Industry leaders will help you navigate this space and key conference themes, with prominent names including Kelsey Hightower, Gregor Hohpe, Charity Majors, Manuel Pais, Nicki Watt, Brian Finster, Mallory Haigh, and more. At DZone, we value peer-to-peer knowledge sharing and find that the best way for developers to learn about new tech initiatives, methodologies, and approaches to existing practices is through the experiences of their peers. And this is exactly what PlatformCon is all about! This conference also gives attendees unparalleled access to the speakers via Slack channels. What better way to navigate the evolving world of platform engineering than to learn from the experts who are leading the way?

3. Embark on a Custom DevOps + Platform Engineering Journey

As we mentioned earlier, platform engineering is multifaceted, and with that, the approaches and practices are as well. The five conference tracks highlighted below are intended to allow you to tailor your experience and platform engineering journey.

- Stories: This track enables you to learn from the practitioners who are building platforms at their organizations and will provide you with adoption tips of your own.
- Culture: This track focuses on the relationships between all of the developers and teams involved in platform engineering — from DevOps and site reliability engineers to software architects and more.
- Toolbox: This track focuses on the technical components of developer platforms and dives into the tools and technologies developers use to solve specific problems. Conversations will focus on IaC, GitOps, Kubernetes, and more.
- Impact: This track is all about the business side of platform engineering.
It will dive into the key metrics that C-suite executives measure and will offer advice on how to get leadership buy-in to build a developer platform.
- Blueprint: This track will give you the foundation to build your own developer platform, covering important reference architectures and key design considerations.

Register Today to Perfect Your Platform

Now that we've shared multiple reasons why you should attend PlatformCon 2024, we'll leave you with one final motivation — it's free to register and attend! This conference is the perfect opportunity to connect with like-minded people in the developer space, learn more about platform engineering, and help determine the best next steps in your developer platform journey. Learn more about how to register here. See you there!
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Enterprise AI: The Emerging Landscape of Knowledge Engineering.
I recently read an article about the worst kind of programmer. I agree with the basic idea, but I wanted to add my thoughts on it. I have seen, over time, that developers seem invested in learning new things for the sake of new things, rather than getting better at existing approaches. Programming is like everything else — new is not always better.

I have a Honda CRV that is not as easy to use as some cars I owned before touch interfaces became popular. The touch screen sometimes acts like I'm pressing various places on the screen when I'm not, making beeping noises and flipping screens randomly, and I have to stop and turn the car off and on to make it stop. It has a config screen with every option disabled. It has bizarre logic about locking and unlocking the doors that I have never fully figured out. I often wonder if the devs who make car software have a driver's license.

If I asked 100 programmers the following question, chances are very few of them, if any, could answer it without a web search: Bob just completed programming school and has heard about MVC, but is unsure how to tell which code should be the model, which code should be the view, and which code should be the controller. How would you explain the MVC division of code to Bob? It's not a genius question; it's really very basic stuff. Here are some other good questions about other very basic stuff:

1. Why Did Developers Decide in REST That POST Is Create and PUT Is Update?

The HTTP RFCs have always stated that PUT creates or updates a resource on the server such that a GET on that resource returns what was PUT, and that POST is basically a grab bag for whatever does not fit into the other verbs. The RFCs used to say that a POST URL is indicative of an operation; now they just say POST is whatever you say it is. Developers often talk about the REST usage of POST and PUT like Jesus Christ himself dictated this usage, like there is no argument about it. I have never seen any legitimate reason why PUT cannot be create-or-update as the RFC says, with POST reserved for non-CRUD stuff. Any real, complex system that is driven by customer demand for features is highly likely to have some operations that are not CRUD — integrations with other systems, calculations, searches (e.g., a filter box that shows matches as you type, or finding results for a search based on input fields), and so on. By reserving POST for these kinds of operations, you can immediately identify anything that isn't CRUD. Otherwise, you wind up with two usages of POST — mostly for create, but here and there for other stuff.

2. Why Do Java Developers Insist on Spring and JPA for Absolutely Every Java Project Without Question?

Arguably, a microservice project should be, well, you know, micro. Micro is defined as an adjective that means extremely small. When Spring and JPA take up over 200MB of memory and take 10 seconds to fire up a near-empty project that barely writes one row to a table, I'm not seeing the micro here. Call me crazy, but maybe micro should apply to the whole approach, not just the line count: the amount of memory, the amount of handwritten code, the amount of time a new hire takes to understand how the code works, etc. You don't have to be a freak about it, trying 10 languages to see which uses the least amount of RAM; just be reasonable about it. In this case, Spring and JPA were designed for monolithic development, where you might have problems like the following: A constructor is referred to 100 times in the code.
Adding a new field requires modifying all 100 constructor calls to provide the new field, but only one of those calls actually uses it, so dependency injection is useful. Or there are thousands of tables, with tens of thousands of queries, that need to be supported in multiple databases (e.g., Oracle and MSSQL), with use cases like multi-tenancy and/or sharding. There comes a point where it is just too much to do any other way, and JPA is very helpful.

3. Why Does Every Web App Require Heavy Amounts of JS Code?

When I started in this business, we used JSP (Java Server Pages), which is a type of SSR (Server-Side Rendering): basically, an HTML templating system that fills in slots with values that usually come from a database. It means that when users click a button, the whole page reloads, which these days is fast enough to be a brief sort of blink. The bank I have used since about 2009 still uses some sort of SSR. As a customer, I don't care that it's a bit blinky. It responds in about a second after each click, and I'm only going to do maybe 12 page loads in a session before logging out. I can't find any complaint on the web about it. I saw a project "upgrade" from JSP to Angular. They had a lot of uncommented JSP code that nobody really knew how it worked, which became Angular code nobody really knew how it worked. Some people would add new business logic to Angular, some would add it to Java code, and nobody leading the project thought it was a good idea to make a decision about this. Nobody ever explained why this upgrade was of any benefit, or what it would do. The new features added afterward were no more or less complex than what was there before, so continuing to use JSP would not have posed any problems. It appeared to be an upgrade for the sake of an upgrade.

4. Why Is Everything New Automatically So Much Better Than Older Approaches?

What is wrong with the tools used 10 or 15 years ago? After all, everything else works this way. Sure, we have cars with touch screens now, but they still use gas, tires, cloth or leather seats, a glove box, a steering wheel, glass, etc. The parts you touch daily to drive are basically the same as decades ago, with a few exceptions like the touch screen and electric engines. Why can't we just use a simple way of mapping SQL tables to objects, like a code generator? Why can't we still use HTML templating systems for line-of-business apps that are mostly CRUD? Why can't we use approaches that are only as complex as the system at hand requires? I haven't seen any real improvements in newer languages or tooling that are significantly better in real-world usage, with a few exceptions like containers.

5. Do You Think Other Industries Work This Way?

I can tell you right now that if engineers built things the way programmers do, I would never get in a car, walk under a bridge, or board an airplane. If doctors worked that way, I'd be mortally afraid at every visit. So why do we do things this way? Is this really the best we can do? I worked with a guy who asked, shortly after being hired, "Why the f do we have a mono repo?" When I asked what was wrong with a monorepo, he was unable to give any answer, but he convinced management that this had to change pronto, apparently convinced with almighty passion that all microservice projects must be structured as separate repos per service. Not sure if it was him or someone else, but somehow it was also determined that each project must be deployed in its own container.
These decisions were detrimental to the project in the following ways:

- One project was a definition of all the objects to be sent over the wire. If a service A object is updated to require a new field, there is no compile error anywhere to show the need to update constructor calls. If service B calls A to create objects, and nobody thinks of this, then probably only service A is updated to provide the new required field, and a subtle, hard-to-find bug exists that might take a while for anyone to even notice.
- Your average corporate dev box can handle maybe 15 containers before flopping over and gasping for air, so we quickly lost local development in one of those unrecoverable ways where the team would never get it back.
- Every new dev would have to check out dozens of repos.
- No dependency information between repos was tracked anywhere, making it unknowable which subset of services has to be run to stand up service X in order to work on that one service. Combined with the inability to run all repos locally, this yields two equally sucktastic options for working on service X: use trial and error to figure out which subset stands up X and run it locally, or deploy every code change to a dev server.

When Alex talks about programmers using hugely complex solutions of the sort he describes, it sounds to me like devs who basically jerk off to everything new and cool. This is very common in this business; every team has people like that in it. That isn't necessarily a big problem by itself, but when combined with the inability or unwillingness to ensure other devs are fully capable of maintaining the system, and possibly the arrogance of "everything I say is best" and/or "only I can maintain this system," that's the killer combination that does far more harm than good.
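Returning to question 1 above (POST vs. PUT), here is a minimal sketch of the convention argued for there, using curl against a hypothetical /invoices resource. The host, paths, and payloads are illustrative assumptions, not part of the original article.

Shell
# PUT as "create or update": a GET on the same URI returns what was PUT.
curl -X PUT https://api.example.com/invoices/42 \
     -H "Content-Type: application/json" \
     -d '{"customer": "ACME", "total": 100.0}'

curl https://api.example.com/invoices/42   # returns the representation created above

# POST reserved for non-CRUD operations (searches, calculations, integrations),
# so anything that isn't plain CRUD is immediately recognizable.
curl -X POST https://api.example.com/invoices/search \
     -H "Content-Type: application/json" \
     -d '{"customer": "ACME", "status": "unpaid"}'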
I remember back when mobile devices started to gain momentum and popularity. While I was excited about a way to stay in touch with friends and family, I was far less excited about limits being placed on call length minutes and the number of text messages I could utilize … before being forced to pay more. Believe it or not, the #646 (#MIN) and #674 (#MSG) contact entries were still lingering in my address book until a recent clean-up effort. At one time, those numbers provided a handy mechanism to determine how close I was to hitting the monthly limits enforced by my service provider. Along some very similar lines, I recently found myself in an interesting position as a software engineer – figuring out how to log less to avoid exceeding log ingestion limits set by our observability platform provider. I began to wonder how much longer this paradigm was going to last. The Toil of Evaluating Logs for Ingestion I remember the first time my project team was contacted because log ingestion thresholds were exceeding the expected limit with our observability partner. A collection of new RESTful services had recently been deployed in order to replace an aging monolith. From a supportability perspective, our team had made a conscious effort to provide the production support team with a great deal of logging – in the event the services did not perform as expected. There were more edge cases than there were regression test coverage, so we were expecting alternative flows to trigger results that would require additional debugging if they did not process as expected. Like most cases, the project had aggressive deadlines that could not be missed. When we were instructed to “log less” an unplanned effort became our priority. The problem was, we weren’t 100% certain how best to proceed. We didn’t know what components were in a better state of validation (to have their logs reduced), and we weren’t exactly sure how much logging we would need to remove to no longer exceed the threshold. To our team, this effort was a great example of what has become known as toil: “Toil is the kind of work that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows.” – Eric Harvieux (Google Site Reliability Engineering) Every minute our team spent on reducing the amount of logs ingested into the observability platform came at the expense of delivering fewer features and functionality for our services. After all, this was our first of many planned releases. Seeking a “Log Whatever You Feel Necessary” Approach What our team really needed was a scenario where our observability partner was fully invested in the success of our project. In this case, it would translate to a “log whatever you feel necessary” approach. Those who have walked this path before will likely be thinking “this is where JV has finally lost his mind.” Stay with me here as I think I am on to something big. Unfortunately, the current expectation is that the observability platform can place limits on the amount of logs that can be ingested. The sad part of this approach is that, in doing so, observability platforms put their needs ahead of their customers – who are relying on and paying for their services. This is really no different from a time when I relied on the #MIN and #MSG contacts in my phone to make sure I lived within the limits placed on me by my mobile service provider. 
Eventually, my mobile carrier removed those limits, allowing me to use their services in a manner that made me successful. The bottom line here is that consumers leveraging observability platforms should be able to ingest whatever they feel is important to support their customers, products, and services. It’s up to the observability platforms to accommodate the associated challenges as customers desire to ingest more. This is just like how we engineer our services in a demand-driven world. I cannot imagine telling my customer, “Sorry, but you’ve given us too much to process this month.” Pay for Your Demand – Not Ingestion The better approach here is the concept of paying for insights and not limiting the actual log ingestion. After all, this is 2024 – a time when we all should be used to handling massive quantities of data. The “pay for your demand – not ingestion” concept has been considered a “miss” in the observability industry… until recently when I read that Sumo Logic has disrupted the DevSecOps world by removing limits on log ingestion. This market-disruptor approach embraces the concept of “log whatever you feel necessary” with a north star focused on eliminating silos of log data that were either disabled or skipped due to ingestion thresholds. Once ingested, AI/ML algorithms help identify and diagnose issues – even before they surface as incidents and service interruptions. Sumo Logic is taking on the burden of supporting additional data because they realize that customers are willing to pay a fair price for the insights gained from their approach. So what does this new strategy to observability cost expectations look like? It can be difficult to pinpoint exactly, but as an example, if your small-to-medium organization is producing an average of 25 MB of log data for ingestion per hour, this could translate into an immediate 10-20% savings (using Sumo Logic’s price estimator) on your observability bill. In taking this approach, every single log is available in a custom-built platform that scales along with an entity’s observability growth. As a result, AI/ML features can draw upon this information instantly to help diagnose problems – even before they surface with consumers. When I think about the project I mentioned above, I truly believe both my team and the production support team would have been able to detect anomalies faster than what we were forced to implement. Instead, we had to react to unexpected incidents that impacted the customer’s experience. Conclusion I was able to delete the #MIN and #MSG entries from my address book because my mobile provider eliminated those limits, providing a better experience for me, their customer. My readers may recall that I have been focused on the following mission statement, which I feel can apply to any IT professional: “Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else.” – J. Vester In 2023, I also started thinking hard about toil and making a conscious effort to look for ways to avoid or eliminate this annoying productivity killer. The concept of “zero dollar ingest” has disrupted the observability market by taking a lead from the mobile service provider's playbook. Eliminating log ingestion thresholds puts customers in a better position to be successful with their own customers, products, and services (learn more about Sumo Logic’s project here). 
From my perspective, not only does this adhere to my mission statement, it provides a toil-free solution to the problem of log ingestion, data volume, and scale. Have a really great day!
Navigating the intricate world of software development is not merely a solitary pursuit; it's a collaborative journey where seasoned engineers play a pivotal role as mentors. Drawing from my personal experiences in the industry, which spans over a decade, I embark on a thoughtful exploration of effective mentorship in software development. In this post, I'll delve into the profound significance of mentorship, share insightful anecdotes from my own journey, and offer actionable tips for senior engineers eager to become impactful mentors. The Crucial Role of Mentorship in Software Development Mentorship in software development is akin to a dynamic dance between experienced professionals and those at the inception of their careers. It goes beyond the traditional hierarchical structures, serving as a conduit for the exchange of knowledge, experiences, and guidance. The landscape of software development, with its ever-evolving technologies and methodologies, makes effective mentorship indispensable. 1. Knowledge Transfer Mentorship acts as a bridge for the transfer of tacit knowledge, the kind that textbooks and online courses can't encapsulate. The insights, best practices, and practical wisdom that mentors impart significantly accelerate the learning curve for junior engineers. 2. Career Guidance Beyond technical skills, mentorship extends to offering invaluable career guidance. Navigating the complex terrain of the tech industry demands insights into various career paths, industry trends, and strategies for professional development – areas where a mentor's compass proves invaluable. 3. Personal Development Mentorship is not confined to the professional realm; it encompasses personal development. Mentors often assume the role of career coaches, helping mentees cultivate essential soft skills, navigate workplace dynamics, and foster a growth mindset. Journeying Through Mentorship: Insights from Personal Experiences Having transitioned from a managerial role at a junior level to senior management over my extensive 12+ years in the software development industry, mentorship has been an intrinsic part of my professional narrative. Witnessing the growth of junior engineers, celebrating their achievements, and understanding how mentorship contributes to the collective advancement of the tech community has been a source of profound satisfaction. 1. Fostering a Growth Mindset A key lesson from my mentoring experiences is the significance of cultivating a growth mindset. Encouraging junior engineers to view challenges as opportunities for learning, providing constructive feedback, and celebrating their achievements create a positive learning environment. 2. Tailoring Communication Styles Effective mentorship requires the ability to tailor communication styles to individual needs. Recognizing that some engineers thrive on detailed technical explanations while others benefit from practical examples is crucial for effective knowledge transfer. 3. Nurturing Confidence Building confidence in junior engineers is a cornerstone of effective mentorship. Establishing an environment where they feel safe to ask questions, make mistakes, and iterate on their work instills confidence. As a mentor, instilling belief in their abilities is as crucial as imparting technical knowledge. 4. Setting Realistic Goals Goal-setting is integral to mentorship. Establishing realistic short-term and long-term goals helps junior engineers track their progress and provides a roadmap for their professional development. 
These goals should align with their interests and aspirations. 5. Encouraging Autonomy While mentorship involves guidance, it is equally crucial to encourage autonomy. Empowering junior engineers to take ownership of their projects, make decisions, and learn from the outcomes instills a sense of responsibility and independence. Practical Tips for Effective Mentorship in Software Development Now that we've explored the profound significance of mentorship and gleaned insights from personal experiences, let's distill these lessons into actionable tips for senior engineers aspiring to be effective mentors in the dynamic realm of software development. 1. Establish Clear Communication Channels Foster open and transparent communication channels. Regular check-ins, one-on-one meetings, and feedback sessions provide a structured platform for mentorship. 2. Understand Individual Learning Styles Recognize that each mentee has a unique learning style. Tailor your approach to match their preferences, whether they thrive on hands-on coding sessions or prefer conceptual discussions. 3. Share Personal Experiences Personal anecdotes can be powerful teaching tools. Share your experiences, including challenges faced and lessons learned. This creates a relatable context for mentees to draw insights from. 4. Encourage Continuous Learning Foster a culture of continuous learning. Introduce mentees to relevant resources, suggest books, online courses, or workshops, and encourage participation in industry events. 5. Provide Constructive Feedback Constructive feedback is instrumental in professional growth. Frame feedback positively, focusing on areas of improvement while acknowledging accomplishments. This approach fosters a constructive learning environment. 6. Set Clear Goals and Expectations Define clear goals and expectations for mentorship. Whether it's specific technical skills, project milestones, or career aspirations, having a roadmap provides direction for both mentor and mentee. 7. Create a Safe Space for Questions Ensure mentees feel comfortable asking questions and seeking clarification. Creating a safe space for open dialogue promotes a culture of continuous learning. 8. Encourage Networking and Collaboration Facilitate opportunities for mentees to network with professionals in the industry. Encouraging collaboration on projects and fostering a sense of community contributes to a broader understanding of the tech landscape. 9. Be Adaptable Be adaptable in your mentoring approach. Recognize that the needs and goals of mentees may evolve over time. Being flexible ensures mentorship remains relevant to their changing circumstances. 10. Lead by Example As a mentor, lead by example. Demonstrate the qualities and work ethic you encourage in your mentees. Your actions will serve as a model for their own professional conduct. Conclusion Effective mentorship in software development is an art that demands a blend of technical expertise, interpersonal skills, and a genuine passion for guiding the next generation of engineers. As a senior engineer, embracing the role of a mentor is not just a responsibility but an opportunity to contribute to the collective growth of the tech community. By sharing experiences, fostering a growth mindset, and providing personalized guidance, senior engineers can leave an indelible mark on the careers of those they mentor. The legacy of effective mentorship extends beyond individual achievements, influencing the trajectory of the entire software development landscape. 
In the dynamic realm of technology, mentorship stands as a cornerstone for progress and innovation.