
Hanan Zakai

The end of (Stormy) 2022 is coming

October 2022 brings heightened difficulty and uncertainty: the financial climate in general, and the high-tech industry in particular, is turbulent and unclear.

The beginning of Q4 is usually crunch time for most companies. It was during my tenure at IBM that I took the crash course on this subject. There is no better school for "end of year" strain than a publicly traded company, and if it's an international corporation traded on NASDAQ, even better. There is no tomorrow after December 31st.

All the milestones, releases, demos, and POCs that you committed to delivering throughout the year and postponed to the next month must be delivered. All 2022-related investments, bonuses, and terminations… are knocking on your door, and as always, time and resource availability are not your allies.

The end of 2022 is not business as usual. In the 1939 film "The Wizard of Oz," Dorothy says to her dog at one point: "Toto, I've got a feeling we're not in Kansas anymore." Recent management research and publications address the fact that traditional, textbook best practices of business strategy (and tactics) become futile in the face of the fast pace of change and continuous uncertainty we are experiencing.

This October's enhanced difficulty and uncertainty add two crucial parameters to the equation. The first is an imminent demand to economize, which is corporate slang for cutting costs dramatically. The demand to stretch the "runway" is amplified by the second variable: the lack of visibility into 2023 budgets, information that would enable us to take the necessary actions and make investments that will not become obsolete on January 1st. The new (business) world order requires us to be more agile, resilient, and decisive.

Paving a course of action starts with mapping out our situation: beginning with our 2022 "commitments" and available resources and transforming them into 2-3 alternative high-level release plans, incorporating issues like order of importance and levels of risk. Once these are articulated, we perform a crucial step: foretelling the worst-case scenario for each alternative. This action inhibits the planning fallacy (a common bias implying that however long you think you need for a task, you will actually need longer, regardless of how many times you have done the task before or how deep your expert knowledge is). In regular times this phenomenon has damaging potential; during Q4 it becomes extremely critical, because there will be no overtime after December 31st. The process concludes with the selection of the optimal release plan and the construction of a detailed, sprint-level release plan.

At this point, missions are prioritized and the list of required resources is established. The first choice is usually diverting resources from less crucial activities; if that is applicable, great. Nonetheless, we must double-check that we are not sacrificing the "less short-term" (Q1 release dates are right around the corner) for the short-term demand. If this solution isn't feasible, the next step is checking our recruitment pipeline to see whether there are candidates in advanced stages who can be signed in the next week or so. The usual challenge with this path is that even if there are potential recruits, their notice period plus onboarding duration (30-45 days) will prevent any real Q4 contribution. This year, unusually, there is a chance that there is no open position for recruitment at all, so the odds are slimmer than usual.

Once internal alternatives are exhausted, we should consider external team augmentation. To those of you puzzled by the apparent inconsistency between "external" and "reduced cost," the answer is: consider a team that is onboarded ASAP and released at the end of the year, with no residual effect on other R&D commitments and no recruitment or notice costs. If you want to go the extra mile, you may even consider going offshore. The issue with offshore development at the end of the year is that such teams are usually one week short at the end of December, one very crucial week. On the other hand, there are hybrid Israeli-offshore solutions that enable both reduced costs and the additional working week of December 25th.

The team is established, the backlog is set, and the first sprint has commenced. There is an additional factor that can increase the probability of success: establishing process transparency with our team. Maximizing knowledge of the situation and its context will enable team members both to adapt and to step up to the occasion.

At times like these, I recommend embracing the words of wisdom of the legendary basketball coach and leadership guru John Wooden: "Do not let what you cannot do interfere with what you can do."

Is it time for operational efficiency? Click for more information on how CodeValue can assist you with OPERATIONAL EFFICIENCY.

Published by Hanan Zakai

VP Customers & Division Manager @CodeValue


Leehee Gerti, VP Marketing @CodeValue

Architecture Next 2022

It has been 2 years since we last had the pleasure of meeting and greeting in person, and the long wait was worth it! Over 420 people gathered for a full day of invigorating talks, mingling, and some good food.

What was it all about?

Time brings change, and COVID has played its role as a catalyst in that matter. Its effects have led to an unprecedented shift to a remote-first work culture and accelerated multi-cloud adoption, not only in the tools and platforms harnessed to serve one's business but also in the products and systems that companies build, raising the requirements for hybrid and cross-platform scenarios.

In the software industry, we are expected to be agile and adapt to changes quickly and effectively. This isn’t related merely to development but spans the entire business, product, R&D, UX/UI, architecture, technology, development, and DevOps. 

The need for building software that can operate on and/or integrate with different platforms is rising steadily. New levels of productization, process, and automation are required to meet the new challenges at a high pace, as well as find effective ways to scale MVP products and R&D. 

Therefore, our job as leaders, architects, and engineers, which is to figure out how to build and best use technologies, tools, and platforms to meet such diverse needs and scenarios, has become more and more challenging.

This was the fifth consecutive year of CodeValue's Architecture Next conference. Throughout the day we discussed revolutionary and innovative concepts, technologies, and tools while showing how they can be leveraged and applied to make businesses, processes, and systems better.

General assembly

The conference day was launched with an introduction by SpeedValue’s & CodeValue’s CEO Tali Shem Tov and Chairman Ayal Zylberman. Tali & Ayal delivered a brief talk about the changes we have all experienced during the last 2 years and what those changes entailed for our industry.

Keynote: Architecture Stories from the Trenches – Alon Fliess

The keynote session was delivered by Alon Fliess, CodeValue's Chief Architect, who shared with the audience his experience from over 30 years of significant software development, design, and architecture projects for leading global and cutting-edge companies.

Software Architecture in the Multi-Cloud Era – Amir Zuker, Rotem Barda (Vayyar), Barak Mor

This talk was about building systems with respect to multi-cloud, focusing on architecture and technological concepts in addition to business aspects. One of the key principles for achieving this is building the right abstractions.

How do you do that though? There are so many options to choose from, what are they and how do you implement them? In this session, we tackled these subjects head-on while sharing real interesting demos in the process. Furthermore, we presented a real-world case study of one of our existing customers, Vayyar, and discussed our journey together in transforming their business, product, and technology towards multi-cloud.
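To make the abstraction principle concrete, here is a minimal sketch (in Python, with hypothetical class and method names that are not taken from the talk) of how application code can depend on a cloud-agnostic storage interface rather than on any single provider's SDK:

```python
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """Cloud-agnostic abstraction that application code depends on."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryBlobStore(BlobStore):
    """Stand-in implementation so the sketch runs anywhere.

    A real multi-cloud setup would provide one implementation per
    provider (for example, wrappers around the S3 or Azure Blob
    Storage SDKs) behind this same interface.
    """

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]


def save_report(store: BlobStore, report_id: str, payload: bytes) -> None:
    # Business code sees only the abstraction, so the cloud provider
    # can be swapped through configuration rather than code changes.
    store.put(f"reports/{report_id}", payload)


if __name__ == "__main__":
    store = InMemoryBlobStore()
    save_report(store, "2022-q3", b"demo payload")
    print(store.get("reports/2022-q3"))
```

The same pattern applies to other cloud services; keeping business logic behind such abstractions is what makes it portable across providers.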

Experts Panel

Alon Fliess, Amir Zuker, Hanan Zakai, Amit Kinor, Tomer Karasik, Nir Dobovizki, Eran Barghil.


Executive Track

The Perfect Host – An Ongoing Story

Alon Fliess, Chief Architect at CodeValue (MVP & Microsoft Regional Director).

Web3: From A to S (Security)

Tal Be’ery, CTO Zengo

From Offshore to Global Delivery

Tali Shem Tov, CEO & Co-owner, CodeValue & Esti Felba Hermesh, Director of Global Delivery, CodeValue.

Artificial Intelligence, Machine Learning (and when not to use them)

Nir Dobovizki, Senior Consultant and Software Architect & Backend Practice Lead  @CodeValue


Technologies Track

Composable Components – Play Application Lego

Tomer Karasik, Technical Lead and Software Architect at CodeValue & Ilya Holtz, Senior Full Stack Developer at CodeValue

Building Modern IoT Data Pipelines

Alon Amsalem, Software Architect @CodeValue

Micro Front End – Web-Components in Practice

Yehuda Buzaglo, Senior Frontend Developer @CodeValue


Leehee Gerti, Director of Marketing @CodeValue

Architecture Next 2021

Digital Transformation is one of the most profound changes happening in the technological world around us. More businesses understand that they must level up their tech strategy or be left behind. With a massive amount of cloud, AI/ML, and other emerging technologies, software professionals and decision-makers have difficulty keeping up to date.

How can we achieve Digital Transformation? How can we translate those high-level principles and fancy words to ideas and plans to implement in our software? This is what this year’s Architecture Next was for.

At Architecture Next 2021, we discussed revolutionary concepts and tools for the fourth consecutive year and showed you how they can be applied towards making your next software system a better one. We saw how you could implement Digital Transformation in your software systems and how you could utilize your software architecture to accomplish more.

General assembly

The conference day was launched with an introduction by CodeValue's CEO, Tali Shem Tov. Tali delivered a brief talk about what Digital Transformation is and where it meets CodeValue's offering.

Keynote: The IDF's Journey to the Cloud

The keynote session was delivered by guest speaker "Merav," an officer in the IDF's Digital Transformation Directorate, who took us along the IDF's journey to the cloud.


Executive Track

Digital Transformation – Buzzword or Reality

The first session on this track was given by Alon Fliess, Chief Architect at CodeValue (MVP & Microsoft Regional Director). In his session, Alon stated that there are only two types of organizations: those that have already realized they are software shops and those that haven't. This introductory session discussed the digital transformation revolution, what it is, and what any organization should do about it. Alon covered the analysis process and the effect on products or services, human resources, and technology.


Designing Products in the Digital Transformation Era

The second session was given by Eyal Livne, Senior User Experience Architect at CodeValue. In his talk, Eyal introduced the CodeValue workshop as the flagship "getting started" method for initiating a successful digital transformation.


Application Evolution Strategy

Eran Stiller, CodeValue's CTO, gave the third session on this track, in which he reviewed the technical methods we have to modernize our software systems. He reviewed the questions we should ask ourselves and the strategies we can employ. Starting from lift & shift, through containerization, to cloud-native apps, he took us on a journey relevant to any modern software stakeholder.


The IoT Transformation and What it Means to You

In the 4th session on this track we had the pleasure of hearing Nir Dobovizki, a Senior Consultant and Software Architect at CodeValue. In his talk, Nir covered why IoT is as important as the hype says and what it means for your business.


What Can You Do When Your Release Plan is Being Concluded at the HR Office?

To conclude the Executive track, we heard Hanan Zakai, CodeValue's Technology Division Manager, shedding light on lessons learned from Andy Grove (Intel's legendary former CEO), the competition between Netflix and Blockbuster, and the Challenger disaster, in order to articulate the real recruitment challenge and its magnitude and to establish the means to face it and even create new opportunities.

Modern Technologies For Digital Transformation Track

State in Stateless Serverless Functions

To kick off the "hands-on" track, Alex, a Software Architect at CodeValue, talked about how we can manage state in a stateless, serverless environment on Azure by utilizing Azure Durable Functions, and how we can use the ecosystem to build entire systems, completely serverless.
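As a rough illustration of the pattern described above (a sketch under stated assumptions, not code from the session), an Azure Durable Functions orchestrator written in Python keeps its state in the orchestration history maintained by the runtime, while each step runs as a stateless activity function; the activity names below are hypothetical:

```python
# Minimal sketch of an Azure Durable Functions orchestrator (Python v1
# programming model). The bindings (function.json) and the activity
# functions themselves are omitted here.
import azure.durable_functions as df


def orchestrator_function(context: df.DurableOrchestrationContext):
    # Each call_activity runs a separate, stateless activity function;
    # the Durable runtime checkpoints the results, which is how "state"
    # survives between executions. Activity names are hypothetical.
    order = yield context.call_activity("ValidateOrder", context.get_input())
    payment = yield context.call_activity("ChargePayment", order)
    receipt = yield context.call_activity("SendReceipt", payment)
    return receipt


main = df.Orchestrator.create(orchestrator_function)
```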


How I Built a ML-human Hybrid Workflow Using Computer Vision

The second session in this track was given by Amir Shitrit, a Software Architect at CodeValue. In this talk, Amir demonstrated how he built business workflows using the joint effort of humans and software to automate boring tasks, while compensating for the inaccuracy of ML with human intervention.


We Come in Peace: Hybrid Development with WebAssembly

Following Amir, Maayan Hanin, a prominent Software Architect at CodeValue, examined the relationship between WebAssembly, JavaScript, TypeScript, the browser, and other hosting environments.


Will the Real Public API Please Stand Up?

Amir Zuker is a Senior Software Architect and our Web & Mobile Division Leader. Amir concluded this track with a discussion about authoring public APIs between systems, be they different parts within the same distributed system, a full-blown real-world public API, or anything in between.


Panel - Architecture for Digital Transformation

The topping on the ice cream was the Digital Transformation experts panel, hosting our own experts: Alon, Eran, Amir, Maayan, Nir, Eyal & Hanan. In the panel, the experts talked about all things Digital Transformation and answered questions.

We are here for you

Need consulting or development services? Contact us via the form below. In the meantime, thank you and see you next year!


Ronen Sror, VP Cloud Services Solutions

The First Decision You Need to Make on the Journey to the Public Cloud

Many companies are moving to the public cloud, and a significant portion of them do it on their own, with only basic knowledge of the cloud, its features, and its capabilities. The organization's IT manager or DevOps engineer enters the cloud vendor's portal, fills in details and a payment method, and starts setting up an environment and building the cloud infrastructure based on his existing knowledge, which is sometimes lacking.


The motivation is almost always driven by the legitimate desire to save the organization money, and hence the choice to forgo the involvement of a partner and work directly with the cloud vendor. In most cases, this way of working does not follow the best practices defined by the cloud vendor, which makes the work less efficient, less secure, and more expensive.

Working with a partner (an expert cloud services provider) with experience and understanding of the cloud world creates enormous value for these customers. Many partners offer cloud services; some work with one specific cloud and build expertise in it, while others work with more than one cloud vendor and offer flexibility in choosing the cloud environment.

Some partners come from the IT world, with expertise in on-premises infrastructure and its adaptation to the cloud; others come from the licensing world and provide cloud services after recruiting the relevant experts. Some cloud services providers only resell the cloud vendor's services, while others also provide expert services to help the customer build the infrastructure. Beyond that, there are companies like CodeValue that specialize in applications and digital transformation and provide cloud services at the code level as well, not only at the cloud infrastructure level, thereby helping customers modernize both their infrastructure and their code.

Every customer can, and should, find the partner that suits their needs.

The move to the cloud carries many advantages, and to exploit them optimally you need to know them. Expert cloud services providers will always know the capabilities better, both at the service level and at the pricing level. They will usually also be the first to know about every new feature, since they receive ongoing updates from the cloud vendors, and they know the roadmap of changes that may be added in the near future.

Working with a partner (an expert cloud services provider) will make cloud work more efficient for companies that want to migrate from a private environment. The infrastructure will be built according to the practices recommended by the cloud vendor and against the requirements of all stakeholders in the organization. Each stakeholder has different requirements, different needs, and constraints and standards the customer is obligated to comply with. Building the cloud infrastructure requires taking all of these into account, and they should continue to serve as the guidelines for designing the architecture of the infrastructure and the environment.

In addition, proper infrastructure planning can leverage the advantages of the cloud significantly. For example, different types of storage can be chosen for different needs: "hot" data that requires continuous, frequent access is stored on fast disks, while data that does not need to be accessed frequently can be kept in storage services intended for that purpose (and there are several such tiers), whose price is significantly lower. Likewise, the automation and auto-scaling tools offered by the cloud make it possible to purchase servers sized for routine work, while at moments of unusual load or other temporary need additional servers are spun up automatically to provide immediate support. This practice ensures continuous operation, while the servers that were scaled up and down automatically are paid for only for their actual running time.

"Services" instead of "servers"

One of the prominent advantages of moving to the cloud is managed services. The ability to consume "services" instead of "servers" significantly streamlines day-to-day work, both operationally and financially. All cloud vendors offer a variety of managed services in many areas, such as:

IoT, Data, Storage, Developer tools, Containers, AI+Machine learning, Security, Compute etc..

The ability of the partner's cloud expert to recommend the advantages of using managed services, after studying and understanding the implications of the alternatives against the need, is the key to achieving maximum efficiency for the applications and the cloud working environment.

There are many examples of the value an expert partner creates while accompanying companies on their journey to the cloud. Such a partner always stays in the background to advise on any need that arises, but above all to perform cost optimization periodically. Through frequent optimization, the customer can save tens of percent on the ongoing monthly payment for cloud services. The expert partner's familiarity with the cloud vendor's price lists, and the fact that the partner is among the first to learn of any change in pricing plans and reflects it to the customer, creates tangible and immediate economic value.

An expert partner can also match the customer with a pricing plan that fits its needs. Companies without a partner will usually not be able to achieve the same results. Moreover, customers who work directly with the cloud vendor will usually pay the fixed list price, the highest rate, for each service, while partners can give customers a discount off these prices, offset against their margin on the deal. Partners can also offer the customer the cloud vendor's support at no cost as part of the service, the same support the customer would have to purchase when working directly with the cloud vendor.

So before any decision to set out on the journey to the cloud, whether you are a large company with a developed IT operation or a startup at the beginning of its road, it is recommended to consult and choose the right expert partner.


Hanan Zakai

Surviving the talents’ recruitment challenge

Rapyd's recruitment billboards, which popped up recently in prime locations, are another remarkable reminder of the mind-blowing resources invested in the effort to recruit top development talent.

Are HR, talent acquisition, recruiters, and headhunters "doomed" to this Sisyphean mission? Probably yes, at least for the coming year or so. However, is there any way to ease the burden of the rock? Decrease the slope? Survive this never-ending "battle"? Maybe there is.

We should start with challenging some of the paradigms regarding talent recruitment.

Sisyphus mission

We have yet to see this level of challenge. Not so sure? I recall my first year in the high-tech industry, back in 2000, before the first bubble burst. In the central parking area of Herzliya Pituach, on Maskit St., the cars' windshields were covered with paper ads: "If you're a software developer with three years of experience, come work with us for 30,000 ILS a month." You can draw a straight line from those windshields to today's billboards; the challenge of getting top talent has been here for a few decades and counting.

We will be able to recruit and retain our professionals for many years to come. Apparently not. The paradigm that a developer will stay for 5-6 years no longer carries its weight; endless opportunities and generational shifts (Generation Z, millennials…) have narrowed the average lifespan of a developer's position to somewhere between 2 and 3 years, and in certain lines of expertise, like DevOps, even less. Recent surveys show that the share of developers leaving their jobs doubled from 2008, reaching 15% in 2017, and was estimated to exceed 20% at the start of 2021.

However, there are new factors. COVID-19's boost to digital transformation propelled the already "smoking" Israeli high-tech ecosystem, reflected in an outstanding number of around 50 unicorns, a 400% increase from 2019. More jobs, more money, many more deadlines. On the other hand, remote and hybrid working models enlarged developers' potential working radius and enabled companies in mid-town Tel Aviv and the like to recruit residents of the periphery.

Bottom line, I don't envy my talent acquisition friends.

Dan Heath, in his book "Upstream: The Quest to Solve Problems Before They Happen," addresses solutions for this type of challenge, where one should exercise an "upstream mindset" that enables proactively diminishing a problem rather than continuously reacting to its outcomes.

It's a downstream state of mind to hectically source potential employees, offer them the moon and the stars, and then start looking for their replacements one or two years later, whereas it's an upstream activity to shift our resources to the roots of the situation. To analyze it, we should focus on one of the most crucial factors, perhaps the most crucial one: no company wants to "grow" developers; everybody wants an experienced developer who will have an immediate impact on their code base. After all, a major investment usually reflects grave stakeholder pressure and tight release plans. Upstream thinking points us to bridging the preliminary and biggest gap: turning an entry-level developer into an efficient, productive, somewhat experienced contributor. With the right planning and resources, we can achieve a quicker "time to productivity."

We exercised this way of thinking in CodeValue's bootcamps: 10-12 entry-level developers, fresh academic graduates, whom we turned into smooth "coding machines." The process starts with the bootcamp's screening project, which differs from our regular ones since we're not looking for efficient, clean code but for top university alumni with sharp minds, a positive attitude, and extreme self-learning skills. To the few who passed the screening we provided targeted, exercise-intensive training by our elite architects. At the conclusion of this phase, we allocated them to projects in which a senior CodeValue developer led and mentored them. These bootcamps enable us to cut "time to productivity" and provide quality code contributors to our clients.

Throughout this time, we continued with most of our regular recruiting efforts, since the other part of the equation is keeping the fragile equilibrium between Bootcamp graduates and senior developers.

The cynic will ask: but just four paragraphs above, you wrote that other companies will target these developers. That's correct; however, lower starting salaries and quicker time to productivity improve each developer's net contribution relative to cost, and with the right contracting, engagement, and pinpointed employee retention, we can pick the "fights" for the developers who best fit our DNA and standards.

From a wider high-tech ecosystem perspective, the addition of dozens and hundreds of capable developers may somewhat flatten the unbalanced supply and demand curves and hopefully return some sanity to the Israeli high-tech scene, along with a few hours of relief for recruiting personnel.

Do you want to scale up your development team? Click for more information on CodeValue's Dedicated Bootcamps.

Published by Hanan Zakai

Technology Division Manager @CodeValue


Leehee Gerti, Director of Marketing

.NET Conf Israel 2020

Thank you for joining us at the CodeValue sponsored local Israel event following the global .NET Conf 2020.

.NET 5!

You heard it right. Released on November 10, 2020, .NET 5 is the next version of .NET. As the successor of .NET Core 3.1, this milestone release marks a significant step in the journey toward .NET platform unification across .NET Core, Xamarin, and Mono. Along with the exciting features coming in C# 9, these are thrilling times in the .NET space.

.NET Conf is an annual online event showcasing many of these advancements and capabilities. Following the global event, in December 2020, CodeValue hosted the local Israeli event, in Hebrew, where attendees were able to ask questions and get them answered. CodeValue experts highlighted the critical news and exciting things that .NET has to offer this year. See all 5 sessions from the event below and learn about the new release!


What’s New in C# 9

Moaid Hathot, Senior Architect and Consultant @ CodeValue, Azure MVP


Porting Projects to .NET 5

Nir Dobovizki – Senior Architect and Consultant @ CodeValue


C# Source Generators

Alon Fliess – Chief Architect @CodeValue, Azure MVP, Microsoft Regional Director


Blazor in .NET 5

Alex Pshul – Software Architect and Consultant @ CodeValue


Developing and Deploying Microservices with “Tye”

Eran Stiller – Chief Technology Officer @CodeValue, Azure MVP, Microsoft Regional Director


Panel – Q&A

Alon Fliess, Eran Stiller, Moaid Hathot, Alex Pshul, Nir Dobovizki

Want to stay up to date? Follow us on Social Media

Leehee Gerti, Director of Marketing

Planning for Microservices

Recently, we hosted a half-day online event where our experts shared their understanding of what Microservices are all about.

Sometimes it feels like everybody is creating Microservices Architectures. Everyone’s building a new system with Microservices, decomposing old monoliths, and generally giving us the feeling that Microservices is the only way to go. But is it the only option? What should we consider when approaching Microservices? When should or shouldn’t we use Microservices? And if we do decide to take the approach, how should we handle Microservices?

In this half-day online event Alon, Eran & Tomer shared their understanding of what Microservices are all about, when we should use them, what we should avoid, and how to implement them correctly. If you’re a novice to Microservices, or even if you’ve already heard quite a bit about them, you’ll find these talks beneficial. This workshop was intended for decision-makers, software architects, DevOps architects, senior developers, and senior DevOps engineers.


To Microservice or Not to Microservice? How?

Alon Fliess, Chief Architect @CodeValue

Do more with less: the pain of the modern architect. High cohesion and low coupling, high availability and scale, ease of DevOps. Our systems need to support all of these quality attributes while providing more functionality with fewer resources. We need to be agile, we need to embrace change, we need to have a better way! Microservices architecture (MSA) promises to cure the architect's pains, but does it really deliver?

This lecture presents the essence of MSA: how it answers the main concerns of modern distributed systems, how to get started, and how to migrate current solutions to MSA by adopting an evolutionary migration path, as well as what to be careful about and the signs that we are on the right track. We talk about software architecture evolution, the CAP theorem and eventual consistency, MSA principles, hosting, containers, versioning, orchestrators, and decoupling business processes. By the end of this lecture, the participant will have a better understanding of why, when, and how to embrace MSA.


6 Lessons I Learned on My Journey from Monolith to Microservices

Eran Stiller, CTO @CodeValue

For the past couple of years, Microservices have been all the rage. We want to use Microservices, we want to decompose into Microservices, and we want Microservices to be a part of our world. While modern tools and platforms such as Docker, Kubernetes, Service Mesh, and the public cloud help in implementing and maintaining such systems, the reality is that many fail even before the first line of code is written.

This can happen for many reasons; Perhaps you chose a Microservices architecture for the wrong reasons? Maybe the organization wasn’t ready for it? Or just possibly – perhaps the proposed architecture didn’t embrace the true meaning of Microservices?

As the CTO of CodeValue, I get to tackle these questions a lot. Join me in this session as I provide my perspective on transitioning from Monolith to Microservices through lessons learned in the real world while architecting and implementing multiple Microservices based software systems at various customers.


A Recipe for Pickled Microservices

Tomer Shamam, Senior Software Architect @CodeValue

Microservices are small, independently deployed apps that can be distributed and scaled. The best recipe to "pickle" microservices and harness their true power is to isolate them from one another by putting them inside a container. A container is a standard unit of deployment that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

In this session, we discuss app containers in general and learn how easy and beneficial it is to containerize microservices using Docker, the leading app container solution on the market.


Panel – Microservices Open Q&A

Alon Fliess, Eran Stiller & Tomer Shamam discuss and answer Microservices-related questions raised from the audience.


Want to stay up to date? Follow us on Social Media

Vitali Zaidman

What developers need to know about SEO

In this webinar we defined the developer’s part in SEO and talked about key topics in SEO from the developers’ perspective.

SEO stands for Search Engine Optimization: a set of guidelines that are supposed to make your site appear higher than others in Google search results.

Developers tend to know very little about SEO because we usually focus on how the website is exposed to users rather than to bots, but it's a very beneficial thing to know. SEO is very important for most modern websites because higher rank = more traffic = more money.

In this webinar I define the developer’s part in SEO and talk about key topics in SEO from the developers’ perspective.

What developers need to know about SEO

For more information check out my site:

https://vzaidman.com

Web Architect and Blogger @Welldone Software


Eran Stiller, CTO & Co-Founder @CodeValue

12 lessons from running a full day online conference

Introduction

The coronavirus pandemic has been with us for the past few months. Software conferences all over the world are either canceled, postponed, or moved online. I had three major conferences where I was supposed to speak, which got canceled.

In addition to speaking at various conferences, I organized one major conference for the past couple of years — “Architecture Next.” It ran the risk of cancellation as well. However, together with my colleagues, we decided to run the conference nonetheless, and move it from a physical setting to a virtual one.

Unlike organizing physical conferences and meetups, this was the first time I hosted an online conference. I knew we were sailing into uncharted territory for us. Nevertheless, in hindsight, we had a very successful conference with more than 1,100 registrants and more than 500 attendees and minimal technical glitches.

In my opinion, the most critical aspect of running this conference as smoothly as possible, was the meticulous planning that the team performed beforehand. That’s why I decided to write this article. Its purpose is to share with you what worked and what didn’t work at our conference to make your online conference experience go as well as possible. Online conferences are becoming the norm these days, and I don’t see them going anywhere anytime soon. So let’s get to it.

The Content

Photo by Markus Spiske on Unsplash

“Content is king.”

This saying was true at the time of in-person conferences, and it’s just as accurate today. Any conference should have a vision and a target audience, and it should strive best to serve that audience via the selection of content.

At "Architecture Next," our target audience was software architects and technical decision-makers such as tech leads, executives, etc. Our agenda was to focus on new and upcoming technologies and methods, and that was what we emphasized during our Call for Speakers phase.

Without good content, it doesn’t matter how technically competent your conference is. Focus on your audience, get great speakers, and everything else should follow.

Conference Structure

Photo by Robby McCullough on Unsplash

When building the schedule, we faced the following question: should we split the conference into several consecutive days and hold each track separately? Or should we keep a single-day conference format with multiple parallel tracks?

Each of these approaches has its pros and cons. When you have a physical venue, each additional day can have high costs since you need to pay for the site, the food, etc. On the other hand, you can allow attendees to join more content as there are more available time slots. When attendees are arriving from afar, this makes a lot of sense. The additional per-day costs are suddenly gone when going virtual, so at first, it seems like having a multi-day conference is the better choice.

However, running a single-day conference with multiple tracks allows us to focus all the marketing effort on a single day. This way, you can get a lot of hype from the overwhelming amount of attendees joining at once. Also, since the conference is online, recording the sessions and making them available for your attendees later is much easier than doing it in a traditional session. Besides, our attendees could easily switch between concurrent tracks and get more value out of their time than if the conference was spread over multiple days.

Indeed, running multiple tracks concurrently on a single day is more technically challenging, as it requires having more online moderators. However, I believe that the benefits far outweigh the challenges, and indeed our users seamlessly switched between channels to find the content that was best suited for them at any given timeslot.

Session Length

Photo by Agê Barros on Unsplash

In the past, most traditional conferences set the standard session length somewhere in the range of 45–75 minutes per session, with a 45 to 60-minute session probably being the most common format. At first, online conferences followed the same pattern.

Then Microsoft Build, Microsoft's annual developer conference, which was virtual this year, tried something new. Each session was only 30 minutes long. Fifteen minutes were devoted to a presentation, while the remaining time was comprised of Q&A.

At first, I was quite skeptical of this format since there is a limit to the amount of technical content one can deliver in 15 minutes, but it worked! Sessions were kept short and focused, and it was a breeze watching so many sessions during a single day. In contrast, having an attendee focus on a presentation for 60–75 minutes in front of a screen is very hard, and many attendees will probably not make it to the end.

So what did we do for our schedule? We settled on the middle ground. Each session was 45 minutes long, followed by a 15-minute break. However, presenters were asked to limit their presentation to 35 minutes and allow ten more minutes for Q&A. For most sessions, it worked well. The audience didn’t leave before a session was complete, and the Q&A section was highly successful. In my opinion, with some sessions, the audience was more engaged than in regular in-person events. Perhaps the fear of publicly asking a question is slightly removed in an online setting.

My key highlight here is to keep the sessions focused (no more than 35–40 minutes), not including Q&A, and always leave time for the audience to engage. This engagement is the main benefit that the audience has over watching the recorded session later on YouTube.

Live vs. Recorded Content

Photo by Graydon Driver on Unsplash

The above point about audience engagement leads me to the next topic. One of the primary debates I see around online conferences is whether sessions should be recorded or streamed live. In my opinion, live streaming the session is a far superior choice. I reason that if the entire session is recorded, then the audience can just as well catch it on YouTube later or on some other video-on-demand platform that you offer. There is no motivation to join the session “live.”

When you live-stream the session, the audience feels more connected to the presenter even if members of it don’t directly engage using Q&A or another mechanism. The mere fact that the presenter is currently devoting his or her time to the audience has a very positive effect on all attendees. It’s true that live sessions are more complicated and are more prone to technical errors; however, that’s part of the magic. As long as technical issues are kept at a minimum and the day progresses well, having live sessions is a huge win.

Even while live streaming, presenters can include recorded clips within their sessions. This is especially true for complex demos that have a high chance of going wrong when presented live. Since the sessions should be kept short anyway, recording complex demos can significantly reduce risk, while still maintaining the “live” aspect of the rest of the session.

If you still prefer playing recorded content, always leave a live intro into the content and a live Q&A with the presenter at the end. In my view, this is the minimum one can do to thank the audience for attending. However, live streaming, in my opinion, is the best.

The Platform

Photo by Ivan Nieto on Unsplash

After finalizing your strategy, the next step is to choose a broadcast platform. Here, you’ll have several options.

Video conference software

You can use a conference call platform that allows all attendees to chat and show their audio and video, much like a regular online call. Examples include Zoom and Microsoft Teams. This method is best suited for relatively small conferences where you want to get the community vibe.

The most significant advantage is that the audience feels more engaged than the other methods, and it has the most similar effect to an in-person gathering where everyone can see and be seen. However, this approach’s drawback is that you, as the organizer, have no control over who can unmute themselves or share a video feed, and you never know what they will present.

At an in-person event, there is only a small chance that an attendee will disrupt your event. Yet, at an online event, under the cover of anonymity, the likelihood of such interruptions significantly increases, especially when a large crowd is present.

Since you want to keep your attendees safe from phenomena such as Zoombombing, using this type of interaction for an online conference can be challenging. But, this is an excellent tool for a smaller community meetup and the best replacement for in-person attendance.

Online webinar software

The next option is to use software meant for online webinars. These solutions can typically handle a more significant number of participants and provide the organizer with various options to control attendee permissions, thus eliminating the possibility of harassment by attendees. Microsoft Teams Live Events and Zoom Webinar are two of the available options.

The downside to this solution is that typically the interaction becomes a one-to-many interaction instead of many-to-many. In this type of interaction, the audience members cannot communicate freely with each other and instead can only communicate with the host.

As a result, each attendee might feel “alone” in the app instead of having a community or large gathering vibe to it. If you choose this path, I highly recommend augmenting it with a solution for audience networking (see the “Audience Engagement” section below).

For “Architecture Next,” we chose to use GoToWebinar as our streaming platform due to others’ recommendations. In hindsight, we were pleased with the platform as it was relatively easy to operate, had only minor technical glitches during the day, and provided an overall good streaming experience.

Live streaming platform

Another option is to use a public live streaming service such as YouTube, Facebook, or Twitch. Using this option eliminates much of the friction of joining a session, and some platforms, such as Twitch, allow a high degree of communication between attendees. However, using these platforms has the cost of reducing the connection that you, as the organizer, have with your attendees. For some conferences, it might be a valuable solution, though.

No Registration Cap

Photo by Ludovic Charlet on Unsplash

When running an in-person event, you have a physical limit to the number of attendees — the session rooms’ sizes. However, one of the advantages of going virtual is that this limitation is no longer a barrier. You can accommodate any size of gathering! However, do note that there is a caveat here — the license of your streaming platform might impose some limits.

Often enough, to host more attendees, you need a more expensive license. If possible, defer the decision on which license to purchase as close to the event as possible, when you have more information regarding the expected attendance. For example, at “Architecture Next,” we waited until a week before choosing and buying the exact license we needed.

Don't forget, though, that for free-to-attend events the number of attendees is much lower than the number of registrants. So take it into account and plan accordingly. In my experience, typical attendance rates fluctuate between 20% and 50%, depending on geography and local culture.

Hassle-Free Session Switching

Photo by Glenn Carstens-Peters on Unsplash

One of the critical aspects of running a successful online conference is reducing attendee dropout during the day. Several conferences that I participated in required their users to follow a different link for each session. That’s quite cumbersome. Think about it — if you just got your users to leave the room, they have one more barrier to rejoin, so why would they bother? Imagine that you would force your users to leave the room after each session in an in-person conference, even if their next session is right there. It makes little sense.

The solution we used was to run each track in its own continuous channel. This way, the three tracks acted as television channels, where users could flip between them. As long as users wanted to stay in a particular track (or room), they didn’t need to do anything. As a result, staying within the conference turned into a more comfortable choice to perform, thus reducing dropout.

Moderate and Host Sessions

Photo by Sam McGhee on Unsplash

It’s hard for presenters to follow the stream of questions from the audience as they come in. At “Architecture Next,” we had a host/moderator for each session. His/her job was to introduce the speaker, monitor the audience questions during the session (while answering some in the chat window if possible), and finally ask the most important questions at the end in an interview-like manner.

This audio conversation between the two captivates the crowd and makes them stay longer, as interviews are more engaging than a single person talking.

Having this format allows the audience to feel that they are part of the session, that their voices will be heard at the end, and gives them something to look forward to.

Ideally, these moderators should not have too many responsibilities at the same time other than moderating. For example, I moderated one of the tracks, spoke in other tracks, and had overall responsibility for the conference, and it was very challenging.

Preparing Questions

Photo by Emily Morter on Unsplash

Few things are more awkward than leaving ten minutes for Q&A and not receiving a single question from the audience. If a session goes well and you have many attendees, you probably won't have a problem. However, if you have fewer attendees or the session was less successful, awkward silence can occur.

My recommendation is to ask every presenter to send 3–5 questions about their session beforehand. These questions can then be used as a backup by the host if there is not enough audience participation. Starting with one “pre-made” question can even trigger additional “real” follow-up questions from the audience, and from there, you can continue as usual. Don’t give up on this. You’ll thank me later.

Dry Runs

Photo by Jordan Sanchez on Unsplash

Content dry runs

Practice makes perfect. If possible, have at least one dry run with each of your presenters. Even presenters who are accustomed to large in-person events may struggle with the online format. Feedback is always welcome. These dry runs will be noticeable on the day of the event, as the presentations become more coherent and are timed to fit the allotted slot. All the presenters whose dry runs I had the honor of attending undoubtedly improved their performance.

Of course, this is not always possible with all speakers and events but is highly recommended, even for experienced speakers.

Technical dry runs

Unless your speakers and moderators are well versed with the streaming platform you chose, having multiple dry runs where speakers can try it and get familiar with all the controls is crucial for avoiding technical glitches on the day of the event. If possible, have speakers connect to the platform and use the same hardware they’ll use for the event to reduce risk even further.

Audio-Video Equipment

Photo by Obi Onyeador on Unsplash

Never underestimate the importance of proper audio and video quality. Having a lousy stream reflects poorly on your conference, and might cause attendance to drop. Another culprit can be insufficient upload bandwidth from your presenters’ locations.

Check with your presenters regarding the type of equipment that they have and test it if required. You don’t need to buy radio quality audio equipment and invest a ton of money. Nevertheless, you should also avoid the built-in microphone on your laptop or webcam. A decent modern webcam and a good quality USB headset will take you a long way.

Audience Engagement

Photo by Niclas Moser on Unsplash

Social networking

One area where an online conference clearly lacks against its in-person counterpart is around audience engagement and networking. At an in-person event, there are various opportunities for the audience to get to know each other, talk to speakers 1-on-1, meet new friends, etc.

While this problem has not been solved yet, we tried to use Discord for audience networking at our conference. Discord is a chat service that originated from the gaming world and is similar to Slack. We chose to use Discord as it is less formal than Slack, and that’s the experience we wanted to give to our attendees. There is even a good server template to get you started.

We invited each attendee to join our Discord server, where attendees could converse with each other and discuss various items with the speakers who were also available on the platform for chats. We created a channel for every track, and at the request of the audience during the day, we even created a dedicated channel for job seekers and job openings. Overall, the experience was a successful one though there were two unresolved issues:

1) The Discord chat was not integrated with the streaming platform and required users to have two separate windows/tabs open in parallel. This requirement is not ideal and does not put audience engagement at the forefront.

2) Many of the target audience did not have a Discord account since they were not gamers, which acted as a barrier to entry. They also needed to learn how to use the platform. Slack might have been a better match, since it is reasonable to assume that most of our target audience is already familiar with it.

This issue is an item that we’ll have to consider for future conferences.

Presenter breakout room

For me, it looked like the ten minutes of Q&A at the end of each session was often not enough, and we had to stop without answering all the questions. At an in-person conference, attendees can simply walk up to the stage and talk to the presenter, and they can even walk to the lounge where the discussion can continue. We didn’t have an equivalent experience for this. I think that we should have some sort of breakout room where interested members of the audience can have an audio-video chat with the presenter following the session. We’ll have to put more thought into this, and any ideas and suggestions would be welcomed.

Conclusions

Me directing the conference

Organizing community events is a lot of work. But it’s fun and it’s essential. Knowledge sharing is one of the most important things that we can do as professionals, and conferences are significant.

I hope that this article gives you some insight into what it’s like to organize an online conference. This conference was the first time I ever hosted such an event, and I hope that my learnings can serve others. If you have any additional thoughts, I welcome them in the comments below.

Will conferences remain virtual after we win the fight against coronavirus? Will online conferences become the new norm? What will I do for my next conference after COVID-19 is behind us? I don’t know yet, but I’m pretty sure that we’ll have a mix of both styles as every type of event has its advantages. One thing I know for sure is that nothing beats speaking to a large crowd, whether online or in-person. Thank you for reading, and see you at the next event.

Published by Eran Stiller

CTO & Co-Founder @CodeValue


Leehee Gerti, Director of Marketing @CodeValue

Architecture Next 2020

One full day, over 500 participants, 3 tracks, 13 lectures, and one cloud experts panel!
Wow, we had a blast.

The software development world is evolving at a tremendous pace. New technologies and platforms are abundant, and things that were brand new a year ago can suddenly seem like ancient history. The software architect's job is to figure out how to best use these technologies and platforms to his/her advantage, and that job is getting harder and harder. At Architecture Next 2020 we discussed revolutionary concepts and tools and demonstrated how they can be applied towards making your next software system a better one.

Due to coronavirus-related restrictions, this year's conference, held for the third consecutive year, was all virtual. But, as usual, it was packed with great content and insightful speakers.

General assembly

The conference day was launched with an introduction by CodeValue's new CEO, Tali Shem Tov. Tali delivered a brief talk about the models and technology of the new era.

The keynote session was delivered by guest speaker Magnus Mårtensson, the Founder & CEO of Loftysoft, a Microsoft Azure Most Valuable Professional, and a Microsoft Regional Director. Based on his extensive experience helping numerous customers (from small businesses to enterprises), Magnus highlighted some areas with important learnings and common challenges to target early optimization paths on the way to the cloud.

Keynote: The Cloud challenge is more than just technical – people are involved

Upon completion of the opening lecture, the day was divided into 3 different tracks:


Executive Track

The first session on this track was given by Alon Fliess, Chief Architect at CodeValue (MVP & Microsoft Regional Director). In his session, Alon elaborated on the essence of the APM systems, the good, the bad, and the vision about their future.

APM – What Is It, and Why Do I Need It?


The second session was given by Erez Pedro, co-founder and head of product & UI/UX at CodeValue. In his talk, Erez demonstrated how, together, we evolve a system from a technical device into a full product through a process that includes analysis and design with rapid prototyping.

Product Thinking 101


Nir Dobovizki, a Software Architect and Consultant at CodeValue, gave the third session on this track, in which he told us the tragic story of the microservices-based, modular, fully automatic, next-generation, totally buzzword-compliant, multi-satellite ground station that wasn't.

In Space, No One Can Hear Microservices Scream – a Microservices Failure Case Study


To conclude the Executive track, we had the pleasure of hearing Alex Pshul, a software architect, consultant, speaker, and tech freak. Alex shared with us what can be learned from testing the execution of 300K messages per second in a totally serverless system.

What We Learned by Testing Execution of 300K Messages/Min in a Serverless IoT System


Cloud & Back-End Track

To kick off the Cloud & Back-End track, Michael Donkhin, a Software Architect at CodeValue, talked about all things Java. He started with a retrospective of the Java platform's history. Next came a review of some of the most popular frameworks around Java. Finally, Michael concluded with a review of ongoing efforts to improve the platform further and extend its reach, such as Project Valhalla and GraalVM.

Java Turns 25 – How Is It Faring and What Is Yet to Come


The second session in this track was given by CodeValue's Co-Founder & CTO, Eran Stiller. Eran has been recognized as a Microsoft Most Valuable Professional (MVP) for Microsoft Azure since 2016 and as a Microsoft Regional Director (MRD) since 2018. In his talk, Eran reviewed today's most popular API formats and their relative strengths and weaknesses, from REST, through OpenAPI, via gRPC, to the rising star of AsyncAPI.

API Design in the Modern Era


Following Eran was Moaid Hathot, a prominent Software Consultant at CodeValue. Moaid introduced Dapr and demonstrated how we can use it to build a distributed, cloud-native, microservices application, using various programming languages and frameworks, that can run virtually anywhere.

Dapr: The Glue To Your Microservices


Ronen Levinson is a DevOps Engineer and consultant at CodeValue. Ronen concluded the Cloud & Back-End track with a discussion about what OPA is, exploring OPA's integrations with all levels of the cloud-native stack, along with on-stage demos.

Centralized Policy Governance With OPA


Front-End Track

Amir Zuker, a Co-Founder of CodeValue and its Web and Mobile division leader, is a senior software architect specializing in .NET and Web-related toolchain and technology stack. Amir opened the track with a session covering the emergence of WebAssembly into the app world while using Blazor and C#.

Building Web Apps With WebAssembly and Blazor


The second session in this track, by Vitali Zaidman, a Web Architect and Blogger from Welldone Software, demystified the different approaches and discussed the trade-offs while exploring real-world examples.

Do You Need Server Side Rendering? What Are The Alternatives?


Eyal Ellenbogen was our third speaker on that track. Eyal is a Web Developer and Architect at CodeValue. In his session, he explored the process and the decisions involved in building a UI component toolkit and how to get it right the first time.

Building a UI Foundation for Scalability


Ending this track was Vered Flis, a Senior Software Engineer at CodeValue. In her session, she tackled the big questions head-on and unraveled different approaches and practices that will assist you in writing the highly performant web apps expected today.

Because Performance Matters!


Panel – Public Cloud, Hybrid Cloud, Israeli Cloud, Microservices, PaaS, SaaS, and Everything in Between

The cherry on top was the Cloud experts panel, where our own cloud experts (Alon, Eran, Amir & Hanan) hosted Tomer Simon (Ph.D.), the National Technology Officer at Microsoft Israel. In the panel, the experts talked about all things Cloud and answered questions such as: How should you approach the move to the cloud? What are the risks of an on-prem requirement? Should you use PaaS & SaaS, or is IaaS king? Which cloud vendor should you use? And many more pressing issues.

We are here for you

Need consulting or development services? We’re here for you.

Image

Yaara Man|4 years

Manna Irrigation On-Boarding

How do you scale up a successful product?

Case Study

Team: Yaara Man (UX/UI Designer), Erez Pedro (UX Consulting)
Client: Manna Irrigation
Duration: Three weeks
Tools: Pen and paper, Figma

The Client

Manna, an irrigation intelligence leader, provides growers around the world with the actionable information they need to make better-informed and more confident irrigation decisions. Its sensor-free, software-only approach leverages high-resolution, frequently refreshed satellite data and hyper-local weather information to deliver highly affordable and accessible solutions for site-specific irrigation recommendations.

The Brief

Until recently, Manna’s users were registered manually by distributors and marketing staff. Manna needed a new UX that would allow a self-service on-boarding and registration process, to enable the company’s next growth phase.

Manna’s main Irrigation Recommendations screen, redesigned by us shortly before the onboarding project

Users & Audience

The system provides farmers around the world, from rural India to the high-tech oriented Central Valley in California, with accurate irrigation recommendations that replace their traditional irrigation methods, most of which are based on historical irrigation patterns, disregarding climate changes and actual weather conditions. Users range from irrigation managers of large-scale agricultural concerns to the owners of one-man-operation farms with just a few fields.

As opposed to big clients, which will continue to arrive through Manna’s marketing partners, the UX work for the on-boarding process focused mainly on smaller farm owners, who were targeted by a well-funded online marketing campaign.

Manna’s Crop Monitoring screen, redesigned by us shortly before the onboarding project

Roles & Responsibilities

CodeValue’s UX/UI team worked together with Manna’s product and software teams to rethink some of the original design and to extend its capabilities with new features. The work was done remotely while maintaining a close and open online communication with Manna’s Product department and having periodical design reviews with stakeholders.

Scope and constraints

Manna offers new users a 30-day free trial during which they can receive real-time irrigation recommendations for one field, explore additional tools the system offers, and get true value right from the start, free of charge. In order to receive insights for a field, the user must set it up first: draw its geographical boundaries and provide some agronomic data, a potentially complex and lengthy process. The main goal was to reduce friction and enable the user to set up the field as quickly as possible.

Field creation and editing features were previously available only in Manna’s Desktop app, but growers who are using only Mobile devices constitute a large and ever-growing percentage of users, which is likely to grow even more as a result of the campaign – so Manna needed a new Mobile interface for drawing and editing field boundaries.

The on-boarding project followed a recent substantial redesign of the UX of the system’s main screens and a new and improved UI. The new look & feel was to be further developed and implemented in the new onboarding screens and in some of the existing ones.

Additional challenges were the aging software stack of the existing system, limitations posed by the non-technical customer profile, and an extremely short timeline dictated by stakeholders.

Due to the tight deadline and the client’s budget limitations, no user testing was possible, but many of the working assumptions were based on previous user feedback.

Farm Overview screen, Farm map screen, and the app’s main menu, redesigned during the onboarding process to form an improved and cohesive overall experience.

The Design Process

To minimize friction, the amount of mandatory user data was cut to the minimum by relying on common default parameters where possible, without compromising the accuracy of the final irrigation recommendations. Advanced settings are still available one click away, for more technical users who might want to fine-tune their agronomic data.

Some of the new Field Creation flow screens in the Mobile app

A minimalistic and intuitive GIS interface was designed for the definition of field boundaries, taking a Mobile-First approach. The user can easily draw polygons, rectangles, circles and squares by performing simple gestures or by using her own physical location (drawing while walking or driving from point to point).

The flow is accompanied by a friendly walkthrough with clear callouts to guide and explain each step.

Field boundaries map interface, Web & Mobile versions

Summary

Through the course of three short weeks, Manna’s Web & Mobile applications were transformed into a brand new product. The design was very well received by stakeholders and the feedback from users was extremely positive.

Season Timeline editor, a new feature requested by Manna during the work on the On-boarding project

Written by Yaara Man, Sr. Product Designer @CodeValue


Image

Ronen Rubinfeld|4 years

RemoteWork Adoption and Transformation

In my previous blog post, I talked about how to prepare for continuous work and efficient conduct in the aftermath of COVID-19. We all understand that companies need to move from survival-mode assumptions to a coherent understanding that this is the “New Normal”. In this post, we’ll briefly draft the “CodeValue way” with regard to a service focused on how to properly prepare your business for the “new normal” by implementing RemoteWork Adoption and Transformation methods.

transformer

In the new reality businesses are operating in, we see many companies practicing RemoteWork models. Some do so as a strategic decision, while others are forced into it. One way or another, managers who properly adapt, embed, and manage the RemoteWork method in their overall operational system gain the great benefits that come with it: cost savings, access to great talent, work-life balance for their employees, increased productivity, and accelerated business results.

However, alongside these benefits come great challenges. At CodeValue, we have vast experience implementing RemoteWork practices – we have been doing so for more than 10 years, and our team works and operates remotely as freely as it runs locally. With our RemoteWork service, we will support your organization’s RemoteWork Adoption and Transformation:

  • We will start by analyzing the reality that your organization operates in, its current work processes, and the tools and infrastructures your teams use and define how to embed RemoteWork in your existing operation.
  • We will define how to run your Agile practices while working remotely – how to run Planning, Daily, Demo, and other relevant meetings when operating remotely, and how to use and leverage the standard tools you already use, such as Jira or Azure DevOps.
  • We’ll ensure that your teams’ productivity increases and define how to monitor it in terms of deliverables, timelines, and costs.
  • To ensure effective communication and people management, we will also structure a holistic approach combining the managerial routines with collaboration tools such as Teams, Slack, Zoom…

Leveraging CodeValue’s technological excellence, we will also support you in adjusting your dev practices to best support remote and distributed development. We’ll help you adopt a distributed architecture (e.g. microservices), follow modern DevOps and hosting practices (CI/CD, containerization), and apply quality practices including pull requests, automation, and more.

Published by Ronen Rubinfeld

Sr. Director, Development Management & Excellence @CodeValue

Image

Leehee Gerti, Director of Marketing @CodeValue|4 years

Micro Frontends Patterns

Thank you to all who attended our webinar, delivered by Amir Zuker, on Micro Frontends – “extending the microservices idea to frontend development”.

So, what does it really mean? Is it just another hype? Should you consider it? How should one approach it?

These are just some of the questions one might ask when presented with this notion. Long story short – it’s possible! However, it is not for everyone, especially not to the full degree.

View this session where Amir demystifies the concept of micro frontends and tackles the subject head-on.

Micro Frontends Pattern – Replay

We are here for you

Need consulting or development services? We’re here for you.

Image

Ronen Rubinfeld|4 years

From “work from home” to RemoteWork working model

“The Day After” arrived and it is here to stay. By now, it is clear that our future will be unlike anything we have ever known. So, this is exactly the time to stop for a moment, examine where we are, think what is next, and start planning…

Plan how to prepare for continuous work and efficient conduct in this new reality. Define how we move from survival-mode assumptions to a coherent understanding that this is the “New Normal” and that we must adapt our business model to it.

Remote work

With the COVID-19 situation, many companies find themselves forced to change the way they operate and to adapt their work processes and procedures in order to keep the business running and achieve (even if only) some level of business continuity. Many are shifting their work environment from the known and convenient office environment to work from home (WFH). While companies that made this shift to WFH managed to achieve some level of business continuity, they also realized that it is not enough. To operate effectively in the new situation that was forced on them, they also need to adjust their work processes and day-to-day routines and adapt them to the new reality in which they are operating. This new reality, called RemoteWork, is not a new phenomenon. There are many companies (more than 2,500, actually) that have been following this model for many years now, long before COVID-19 entered our lives.

These companies adopted the RemoteWork model as a strategic decision and enjoy the benefits it brings. They strategically chose to operate in an “officeless” environment and save the cost of expensive real estate. They decided to operate in a distributed manner and hire talent from different locations, rather than being restricted to the talent available in any single location. Thus they hire the best talent that fits their needs, instead of fighting the competition in their geography and compromising on “good-enough” talent just because that is what a given location offers. They also achieve a flexible and balanced work-life balance (or work-life integration, which is becoming the more common term these days), increased productivity, and accelerated business results.

But to achieve all of that, these companies had to adopt a company culture and work processes that align with RemoteWork principles. These processes and methodologies consider the characteristics of the RemoteWork model, the implications for the way people operate and communicate, and the challenges derived from such a model. A deep understanding of the RemoteWork model and adopting the right processes will allow companies to enjoy the benefits of the model while properly dealing with its challenges. Choosing RemoteWork as a strategic decision, followed by a “do things right” approach, enables companies to accelerate their business results and achieve a unique advantage compared to the market.

Due to the COVID-19 situation, many companies seem to organically evolve into a hybrid team, as some workers are forced to stay at home for various reasons while other workers return to their office workspaces. In these cases, the company needs to understand that more has changed than just the physical location of a team member (or several), and to adopt a remote-friendly policy and best practices that are neither purely remote nor purely on-site.

CodeValue has been offering development and consultancy service solutions for more than 10 years, utilizing our firepower of more than 220 architects & developers.

CodeValue recently merged with Welldone Software, a boutique software development company that developed a unique and extremely successful model of combining its experts working remotely or on-site. As part of the merger, CodeValue is adding “RemoteWork Analysis & Methodologies“ to our current services.

Published by Ronen Rubinfeld

Sr. Director, Development Management & Excellence @CodeValue

Image

Amir Shitrit|4 years

Implementing Health Checks in ASP.NET Core 3 & Kubernetes

What are health checks and why do we need them?

Glad you asked. When developing distributed applications, there is a multitude of reasons for your services to become unavailable. These reasons include, but are not limited to:

  • Various communication problems, such as connection or request timeouts, blocked ports, protocol versions mismatch, and other network appliances failures.
  • Resource saturation problems, such as overloaded CPU, insufficient memory or disk space, and an overloaded network interface.
  • Cascading failures due to unavailable dependencies, such as database, message queue, etc.
Photo by Hush Naidoo on Unsplash

Each of these phenomena, separately or combined, might lead to your service becoming unavailable to process upstream requests.

While some services are of lower priority, others might be mission-critical. If those become unavailable, something needs to be done. As we all know, the first step to fixing a problem is to become aware that the problem exists in the first place, and this is where health checks come into play.

A health check, as its name implies, is a check issued by a stakeholder against your service in order to determine whether your service is healthy (available) or not. The way this is usually done is by having your service expose an end-point (e.g. TCP, HTTP or gRPC) and having the stakeholder send a request to that end-point. If a healthy and timely response is received, your service is considered healthy. If, on the other hand, a response isn’t received, or the response reports an unhealthy status, your service is considered unhealthy.

What’s a stakeholder? By stakeholder, I mean another service or tool that needs to know your service’s health. In modern distributed systems, such stakeholders typically include APM/monitoring tools (e.g. New Relic, AppDynamics and Prometheus), which need to show your service’s status and alert if there’s something wrong; load balancers, which need to know whether or not to direct traffic to your service; and last but not least, orchestration tools such as Kubernetes, as explained below. In this post, I’m going to focus on ASP.NET Core and Kubernetes, the latter being the most popular container orchestrator out there.

Kubernetes health probes

When defining a pod in Kubernetes, it is possible to also specify three probes (the Kubernetes term for a health check) for your service: a liveness probe, a readiness probe, and a startup probe (which I’ll ignore for now).

A liveness probe is a check that Kubernetes uses in order to determine whether a pod is alive/available. If it’s not, depending on the pod’s restart policy, Kubernetes may decide to restart it.

A readiness probe, on the other hand, is a check that Kubernetes uses in order to determine whether your service is ready to accept traffic.

Both liveness and readiness probes can be specified using different methods, of which HTTP calls and command-line commands are the most common.

Here’s an example of a liveness probe implemented as an HTTP request:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
      timeoutSeconds: 2
      failureThreshold: 5

In this example, Kubernetes will be checking the liveness of the pod by issuing an HTTP GET request to the /healthz path on port 8080 with a custom header and waiting for at most 2 seconds before declaring the check as failed. After 5 failures, the pod will be considered unhealthy. Kubernetes will perform this check every 3 seconds with an initial delay of 3 seconds to account for cold startups, although Startup probes can also be used for that purpose.

HTTP API health checks in ASP.NET Core

In ASP.NET Core HTTP APIs, we would use the built-in support for defining and exposing health check endpoints as can be seen in the following example:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace HttpApiWithHealthChecks
{
    public class Startup
    {
        private const string Liveness = "Liveness";
        private const string Readiness = "Readiness";

        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();

            string dbConnectionString = Configuration.GetConnectionString("OperationalDB");
            string redisConnectionString = Configuration.GetConnectionString("Cache");

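            // The AddSqlServer and AddRedis extension methods below come from the
            // AspNetCore.HealthChecks.SqlServer and AspNetCore.HealthChecks.Redis NuGet packages.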
            services.AddHealthChecks()
                .AddSqlServer(dbConnectionString, tags: new[] { Liveness, Readiness })
                .AddRedis(redisConnectionString, tags: new[] { Readiness });
        }


        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseRouting();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapHealthChecks("/liveness", new HealthCheckOptions
                {
                    Predicate = check => check.Tags.Contains(Liveness)
                });

                endpoints.MapHealthChecks("/readiness", new HealthCheckOptions
                {
                    Predicate = check => check.Tags.Contains(Readiness)
                });

                endpoints.MapControllers();
            });
        }
    }
}

The important parts to note in this example are the definition of the health checks and the installation of the health checks within the request processing pipeline.

The health checks definition part looks like this:

services.AddHealthChecks()
    .AddSqlServer(dbConnectionString, tags: new[] { Liveness, Readiness })
    .AddRedis(redisConnectionString, tags: new[] { Readiness });

Here we have two check types: one for making sure our operational DB is reachable and a second one for ensuring our cache server is reachable. While the first check is relevant to both the readiness and liveness probes, the second one is relevant only to the readiness probe.
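
If a dependency isn’t covered by one of the existing packages, we can also write our own check by implementing the IHealthCheck interface and registering it with the relevant tags. The following is a minimal sketch (my addition, not part of the original sample; the class name and ping URL are hypothetical) of a readiness-style check against a downstream HTTP dependency:

// Hypothetical custom health check - not part of the original sample.
// Requires: using System; using System.Net.Http; using System.Threading;
//           using System.Threading.Tasks; using Microsoft.Extensions.Diagnostics.HealthChecks;
public class DownstreamApiHealthCheck : IHealthCheck
{
    private readonly HttpClient _httpClient;

    public DownstreamApiHealthCheck(HttpClient httpClient) => _httpClient = httpClient;

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        try
        {
            // The ping URL is a placeholder - point it at a cheap endpoint of the real dependency.
            var response = await _httpClient.GetAsync("https://example.com/ping", cancellationToken);
            return response.IsSuccessStatusCode
                ? HealthCheckResult.Healthy("Downstream API is reachable")
                : HealthCheckResult.Unhealthy($"Downstream API returned {(int)response.StatusCode}");
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("Downstream API is unreachable", ex);
        }
    }
}

// Registration in ConfigureServices, tagged so it participates in the readiness endpoint only:
// services.AddHttpClient<DownstreamApiHealthCheck>();
// services.AddHealthChecks()
//     .AddCheck<DownstreamApiHealthCheck>("downstream-api", tags: new[] { Readiness });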

The health checks usage part looks like this:

endpoints.MapHealthChecks("/liveness", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains(Liveness)
});

endpoints.MapHealthChecks("/readiness", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains(Readiness)
});

Here we expose two HTTP health check endpoints: one for liveness, using all checks tagged with the “Liveness” tag, and one for readiness, using all checks tagged with the “Readiness” tag. These are the endpoints specified in our pod’s .yaml file as demonstrated above.

Note how this service exposes the health check endpoints over the same port as the regular API. This is important because if the API port is blocked by a firewall, this will affect the health checks as well and that’s exactly what we want.
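
By default, these endpoints return a plain-text body containing only the aggregated status (“Healthy”, “Degraded” or “Unhealthy”). If the monitoring tools mentioned earlier need more detail, HealthCheckOptions also exposes a ResponseWriter hook. Here is a minimal sketch (my addition, not part of the original sample) that serializes the individual check results to JSON:

// Requires: using System.Linq; using System.Text.Json;
endpoints.MapHealthChecks("/readiness", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains(Readiness),
    ResponseWriter = async (context, report) =>
    {
        context.Response.ContentType = "application/json";
        var payload = new
        {
            status = report.Status.ToString(),
            checks = report.Entries.Select(entry => new
            {
                name = entry.Key,
                status = entry.Value.Status.ToString(),
                description = entry.Value.Description
            })
        };
        // Emit the per-check breakdown so a dashboard can tell which dependency failed.
        await context.Response.WriteAsync(JsonSerializer.Serialize(payload));
    }
});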

Summary

Health checks are an important pattern to employ when developing distributed applications. Among other systems, Kubernetes makes special use of them when starting containers and directing traffic to them.

ASP.NET Core offers a comprehensive model for defining and using health checks in HTTP Web APIs. Using existing NuGet libraries, such as those found in the AspNetCore.Diagnostics.HealthChecks repository, we can easily express our service’s health as the aggregated health of its various dependencies.

In a follow-up post, we’ll see how to incorporate health checks in gRPC services in ASP.NET Core. Stay tuned.

Image

Leehee Gerti, Director of Marketing @CodeValue|4 years

Advanced React – Virtual DOM and Performance

Thank you to all who attended our first (of many to come) webinar, delivered by Vitali Zaidman, on Advanced React – Virtual DOM and Performance.

The virtual DOM is what we generate through the renders of React components, but React is designed in a way where we don’t have to care about what happens as a result of these renders. This lecture aims to explain the relationships between different React elements – a concept that helps reveal React patterns where performance might be an issue. We then show how to avoid running into these performance issues.

Advanced React – Virtual DOM and Performance – Presentation

Advanced React – Virtual DOM and Performance – Replay

We are here for you

Need Consulting or development services? we’re here for you .

Image

Vitali Zaidman|4 years

Forced to Work from Home? Maybe it’s a Blessing in Disguise?

When working remotely, you benefit much more from good working processes and proactive team members and you are punished much more severely for sloppy work processes and personal carelessness

When I heard about the possibility of working from home some 8 years ago at Welldone Software (recently merged with CodeValue), I had no idea what that meant. It even sounded pretty weird to me. How could my boss help me and guide me? How would he know what I was working on? How would we coordinate and plan the work? And what about working in large teams?

Fortunately, I decided to go for it, and today the answers to these questions are clear to me, as is the power this format has. Having also worked in an “open space” for about two years recently, I have the knowledge to compare this experience with the “regular” working method. Today, when many are forced to work from home due to the COVID-19 virus, I have decided to gather my co-workers’ thoughts and my own on working remotely and prepare a set of tips to improve your productivity and work experience.

work from home

It is always worth remembering – when working remotely, you benefit much more from good working processes and proactive team members and you are punished much more severely for sloppy work processes and personal carelessness. Most of the tips can also be of great benefit to teams working in an office environment, but when working remotely they are doubly important.

Even before we move on to the tips, congratulations! You’ve already saved more than an hour of commuting for each of your teammates. You’ve also saved on your office maintenance costs. In the next step, consider saving on real estate as well. Most people nowadays understand that “the future of work is working remotely”, and we see more and more businesses and employees finding that working from home is more fruitful for them.

Proactivity

If you are reading this article, congratulations! You are already on the right path! Being proactive means always thinking about work processes and initiating changes to improve any situation. Usually, team leaders are the ones who initiate changes. They also encourage team members to initiate, or at least think about problems and raise them in front of the team for shared thinking. At our company, everyone is expected to be proactive, and we incentivize initiatives that improve work processes, just as we incentivize embracing new technologies on the technical side.

Agenda

It is important to keep a strict agenda: getting up, washing your face, brushing your teeth, getting dressed, eating, etc. We advise small things that will make you feel that you are not at leisure but at work. For example, change out of your pajamas and don’t work from bed.

It is recommended to have a daily video and audio meeting between all team members in order to plan the day ahead. Such a meeting will help keep everyone’s agenda organized and synchronized. At the meeting, everyone should briefly update what they were working on the day before, what they plan to work on in the day ahead, and whether something or someone can block/delay it.

It is important that everyone comes to this meeting ready, after individual conversations between the various team members, and after they have made sure that their camera and microphone are functioning well and their task boards are up to date.

Don’t forget to drink during the day, have lunch, and occasionally get up, walk around the room, and look out the window into the distance for a few seconds (to keep your eyes healthy). At times, working from home can be intense, and these things can sometimes be forgotten, which is why some people even use an alarm clock to remind them to stop and refresh.

Workspace

Those who work from home have the privilege of working from “their own office”. Potentially, the office is quiet, with a beautiful view out the window, three screens, cozy lighting, and pictures and posters that make you feel good. But the “office” can also be a sofa in the living room, surrounded by scattered clothes and facing a TV.

Beyond the obvious equipment one must invest in to increase productivity – computer screens, a mouse, a chair, and a comfortable desk – under no circumstances should remote-work equipment (headphones, microphone, and camera) be underestimated.

With working from home comes great responsibility. No one will replace your uncomfortable chair or repair the air conditioner in your room. Is the computer screen not at eye level, causing a stiff neck and headaches? No one is coming over to check your posture and give you ergonomic advice. In remote work, it is important to look around and take the initiative in creating as pleasant and comfortable a workspace as possible.

Communication

When working remotely, it is important to maintain even more communication than usual. You should be using your phone frequently and not be embarrassed to call the relevant person or bring up a topic for discussion in the team chat. If in doubt, even the smallest doubt – call and tie up the loose ends right on the spot. Being embarrassed to communicate with other team members is perhaps the most common mistake in remote work.

Accordingly, it is the responsibility of each team member to be available for “a small consultation”, “a video question”, clarifications, messages in shared chats, emails, etc. This requires making sure that team chat alerts pop up on your phone and that it is not on “silent” by mistake.

Tools and Work Processes

Different projects and teams have different tools and processes, but the common denominator between them is the ability to monitor the work process, time estimates, priorities and division of labor in the team.

Small teams usually have simple tools and processes and a lot of verbal communication, but as the team grows, it is important that any individual communication between two people is reflected in the work processes and tools.

For example, let’s say you build a website and management decides to slightly change its design. In that case:

  • The designer updates the site in the design tool
  • The team members are updated that the design was modified, with a link to the update in the design tool
  • A programmer is chosen to implement the changes

If one of these does not happen, there will be misunderstandings: people may spend time working on an older design, or the change might simply not happen.

In many cases, team members have a lot of ideas for improvement (tools and processes), even when they do not realize it. Therefore, it’s always worthwhile to raise these issues in front of the team as much as possible, and then try to implement the new ideas as soon as possible.

Try making sure everyone has enough tasks for several working days ahead. Many times, certain tasks can be blocked, and it is important that everyone knows they can move on to the next thing at any time.

Areas of Responsibility

While this is important in office work as well, areas of responsibility take on additional importance when working remotely. If something is tossed into the air but there is no agreed individual responsible for its completion, it may not happen. Each task should have a known person who is responsible for it, even if several people are involved in its execution.

Wellness and Team Building

Perhaps the least obvious part of working remotely today is how to keep camaraderie within the team despite the physical distance. Different things will help different teams, and this topic should be brought up for the team’s discussion. Here are some tips that can help:

  • Focusing on work and creating together creates closeness by itself
  • Daily meetings and video chat
  • A group chat that is dedicated to entertainment and laughter (and not work)
  • Physical meetings on a weekly or bi-weekly basis
  • Corporate evenings at least once a quarter

Published by Vitali Zaidman

https://vzaidman.com

Web Architect and Blogger @Welldone Software

Image

Hanan Zakai|4 years

The return of the black swan

The Coronavirus’s first weeks of troubling uncertainty sent Hanan back down memory lane to the 2008 collapse

The Coronavirus’s first weeks of troubling uncertainty materialized into a worldwide epidemic as the number of infected people piled up. However, Sequoia’s black swan post was the trigger that sent me back down memory lane to the 2008 collapse and to Sequoia’s “RIP Good Times” presentation. In those years I was CEO & Co-founder of Bit2go (R.I.P.), a mobile startup facing the turbulent subprime crisis. We were on a quest for seed investment, a process that faded out into three years of bootstrap struggle. We survived until a second black swan – a mobile regulation change in Israel – took us out of business. The good news is that nobody remembers the 2008 slope (besides some of the “casualties”), and even in those hectic times worldwide success stories emerged: the iPhone was making its first steps, and Netflix was just pivoting from DVD rentals to content streaming – two outstanding business success stories in the making. The bad news is that many won’t make it, especially those looking for preliminary rounds or those whose next round is planned for Q2–Q3 2020. But then again, black and red pixels are all over the web; you don’t need me for that. I will try to provide some insights from my experience (a B2B2C startup), avoiding the obvious ones – ideas that may help you cross this ocean of uncertainty:

Black swan

In the past twenty years, I have met thousands of startups/entrepreneurs. One phenomenon that really puzzles me every time is the fact that most of them had their story prepared and straightened up in case they failed. Yet, despite their inherent optimism, they weren’t prepared for success. The roots of this course of events lie in the fact that most ventures start with a limited budget and a tight schedule. Once they succeed, the race becomes even faster and they can’t stop: investors pushing, customers “piling up” demanding support and new features, and after all, if it isn’t broken why fix it, let’s keep on running. However, sooner or later, sprinting turns into jogging, which shifts into strolling, down to the very low pace of crawling. Then, usually, the entrepreneur understands, or is forced to by his clients, that he must perform “open heart surgery” on his system.

Get closer to your customers: Face it, new money isn’t coming any time soon. You can only count on your venture’s ability to bring in cash. For this purpose, you should define your top strategic accounts and put all your weight behind these connections. They are struggling too; make the relationships broader and deeper and try to build new solutions, technological and business. At Bit2go we developed several products and services for “Partner”, ones that created revenues that carried us long after we stopped sending out our one-pager.

Short-term commitments only: The known reality of cutting costs, etc., is obvious. However, for those of you who have already received significant investment, I would recommend shortening your planning cycles to no more than two months, during which you perform ongoing monitoring of your venture. Don’t sign any agreement that will cost you expensive legal consulting to get out of or will harm your startup’s brand identity. At Bit2go we used freelance developers and divided the work into increments of several weeks, in parallel to the delivery dates for our customers. Once the worst-case scenario became a reality, we just needed to conclude the last chunk of work and continue putting our efforts and resources into survival mode.

Always have an (independent) plan C: Those of you who know me have probably heard more than once my motto of always having (a rolling) plan B, as one of the basic rules for those for whom “failure is not an option”. However, in these chaotic times, failure may result in more than not meeting your targets – it may mean meeting Chapter 11. You should also have independent plans C and D; there is an imminent possibility of not getting any overtime. At Bit2go we had plans B and C rolling after a great MWC (Mobile World Congress 2010). Unfortunately, plans B and C, with European mobile operators, required substantial resources and time that weren’t feasible after plan A was eliminated by the Israeli mobile regulation change of 2010.

Stormy times are imminent and black swans will continue to return every several years, yet swans, like any other type of bird, will eventually fly away, and if you play your cards right you will find yourself running after a flock of white swans.

Published by Hanan Zakai

Technology Division Manager @CodeValue

Image

Omer Barel|4 years

Battle-Tested Terraform Deployment – Part II

Hello Everyone!

In my previous post, we started discussing the deployment of Infrastructure to Azure using Terraform and Azure DevOps. If you haven’t done so already, please read this post first.

In this post, we will cover the setup of the components and create our first pipeline to test our code!

CI / CD Architecture Flow

GitHub Repository Configuration

My code is on GitHub. We will connect the GitHub repo with the Azure DevOps Pipelines engine in order to build and deploy our Terraform infrastructure code.

At a high level, this is the flow of code in our repository, between branches:

end-to-end ci-cd flow
  • Our features, enhancements, etc. will be done on the pr branch. This can be called a feature branch or any other descriptive name
  • On this branch we will work locally, validating the terraform code, without actual deployment
  • Once we feel comfortable with the result, we will create a pull request (thus the name of the branch, pr) from the pr branch to the dev branch
  • To validate the pull request and approve the merge, we will run a pipeline in Azure DevOps
GitHub pr -> dev pull request validation using Azure DevOps Pipeline
  • Once merged, we will execute another pipeline, that builds the code from the updated dev branch and creates a terraform plan artifact
pipeline artifact
  • We will deploy the terraform plan to Azure using the validated artifact and Azure DevOps release pipeline
dev environment release pipeline
  • Once the code has been successfully deployed to Azure, we will create another merge from dev to master, tag the code, and create a release on GitHub, thus ensuring only validated code exists on the master branch and giving us a checkpoint to go back to in case we need it in the future.
tagged release on GitHub
  • Lastly, we will deploy the validated code to other environments, such as qa, staging and production
multiple environments deployment

Now, let’s head over to Azure DevOps and start preparing our pipeline, so we can connect it to our branches. We will add some code later.

Connect Azure Key Vault to Azure DevOps

In the previous post we created a service principal and a Key Vault in Azure and created several secrets to hold the sensitive information (SPN id, secret, etc.)

The concept is to use secrets from Key Vault as pipeline variables in our Azure DevOps pipeline. Let’s connect Azure Key Vault with Azure DevOps to accomplish that.

For a complete overview of variable groups and Azure Key Vault, read this. The short version:

  1. Go to Variable Groups in your project
  2. Toggle Link Secrets from an Azure Key Vault…
  3. Choose your Azure Subscription and the previously created Key Vault. If you haven’t set up a connection to Azure yet, you can configure one here
  4. Once connected, choose all secrets to be used as variables
Create Variable Group with Key Vault Secrets

Create our “PR-Validation” Pipeline

This pipeline will be used to validate our pull request before we merge our code from the pr branch into the dev branch.

  • In Azure DevOps, go to pipelines and create a new pipeline
  • Choose GitHub, authenticate if needed and choose your repository
  • Click on starter pipeline to get a basic template, modify the name to azure-pipelines-pr.yml and click save and run
  • This is a simple “hello world” pipeline that we will modify to meet our needs
  • Once completed, edit the pipeline and insert the following text:

Before we continue with the configuration, I would like to pause for a moment and explain the pipeline and the logic behind it, as we will use it throughout our project.

name: Build-$(Build.BuildId)

trigger: none
pr: [dev]

pool:
  vmImage: 'ubuntu-latest'

In the above section we define the build name to use the BuildId.

This will give us a unique name for each build, and we will use it later in release and tagging as well, for complete end-to-end traceability (basically, we can track back from a deployed resource in production all the way to the commit that created it).

We also set the build to trigger whenever there is a pull request to the dev branch and nothing but that.

Lastly, we define our build agent to be taken from a pool of ubuntu machines.

steps:
- task: TerraformInstaller@0
  displayName: 'Install Terraform version $(TF-Version)'
  inputs:
    terraformVersion: '$(TF-Version)'
- script: terraform init -get=true -upgrade=true -backend-config='storage_account_name=$(sa-name)' -backend-config='container_name=$(blob-name)' -backend-config='access_key=$(sa-key)' -backend-config='key=$(key)'
  workingDirectory: '$(Build.SourcesDirectory)'
  displayName: 'Terraform Init'

The above section has 2 simple steps:

  1. Install Terraform on the build agent (I haven’t explained the Terraform code yet, and we will dive deeper into it later)
  2. Run a shell script that initializes Terraform with the Azure backend

The steps are as generic as possible. This means we’re not “tied” to Azure DevOps, and we can implement the same flow in any CI engine of our choice (Jenkins / GitHub Actions / etc.) with small modifications:

- script: terraform workspace select '$(Dev-WS)' || terraform workspace new '$(Dev-WS)'
  workingDirectory: '$(Build.SourcesDirectory)'
  displayName: 'Switch to Environment $(Dev-WS)'

In the above section I want to emphasise the use of Terraform Workspaces.

Without diving too deep, workspaces give us the ability to create multiple instances of environments from a single codebase. Meaning, the code is a template for creating a given environment, and workspaces allow us to use the same code to create dev, qa, stg & prod environments from it.


Variables

As you can see, we have several variables we use in the pipeline, all of which can be identified by the syntax $(variable). Let’s configure the connection to our previously created variable group to define values for these variables.

  • Click on the 3 dots and then on triggers to configure input for those variables
azure-pipelines-behind-the-scenes
  • In the opened window, go to the variables tab and then to variable groups
  • Click on link variable group and connect the one you created earlier (DevOps-KV)
  • You should now have values for the following variables in your pipeline:
    • sa-name
    • blob-name
    • sa-key
    • subscription-id
    • client-id
    • client-secret
    • tenant-id
    • repo-name
    • repo-username
    • repo-password
  • If any of the above doesn’t exist in Key Vault, you should go ahead and create them and then make sure you connect them to Azure DevOps, as I explained earlier
  • Additionally, you should create another variable group to hold other non-sensitive variables such as TF-Version (value=0.12.17), Dev-WS (value=dev), etc.
  • Note you don’t need to create values for any variable that starts with $(Build.) as it is a system pre-defined variable. You can read all about these variables here
  • Rename the pipeline to Terraform-PR and click save and queue. Your pipeline will likely fail as we didn’t put any Terraform code in our repository. This is fine as we will do that soon

Connect our pipeline to validate code merge

The goal is to test code before we merge it. Let’s head over to GitHub to make that connection.

  • Go to your repository in GitHub and create a new branch from the master branch and name it dev
  • Go to settings -> branches -> add rule
add branch protection
  • For an in-depth explanation about GitHub protection, see here
  • For our use case, we will configure the following:
    • Name pattern: dev
    • Protect matching branches: check “require status checks to pass before merging” and choose the pipeline from the list. It should appear automatically in the list since we ran it against our repository in the previous step
defining branch protection rule

Recap

In this post, we learned:

  • How to connect Azure Key Vault to Azure DevOps and use secrets from Key Vault as parameters in our Build Pipeline
  • How to configure pipeline-as-code in Azure DevOps using YAML file syntax
  • How to connect a GitHub repository to Azure DevOps and protect a GitHub branch so that only verified code can be merged

What do you think about the process so far? Were you able to test it for yourself? Leave us a comment or DM me on Twitter and share your thoughts!

In the next post we will dive deeper and start deploying our infrastructure code into Azure! If you’re into Kubernetes and Terraform, be sure to follow our blog, as exciting updates are just around the corner!

Image

Leehee Gerti, Director of Marketing @CodeValue|5 years

.NET Conf – Post Event TLV & Haifa

Thank you for joining us at the .NET Conf – Post Events held at Tel-Aviv and Haifa. These events were the local Israeli events sponsored by CodeValue, following the global .NET Conf 2019 online event.

It was great meeting all of you, for you, and for those who wanted to be with us but were unable to, we’re attaching below the presentations given on the day.

.NET Core is getting bigger and more mature. In fact, .NET Core 3, released on 23/09/2019, is the last major release of .NET Core and is considered feature complete when compared with the standard .NET Framework. The next version, to be released during 2020, is .NET 5, which marks a huge step forward for the platform and a unification of all the platform’s runtimes.

.NET Conf 2019 is an annual online event showcasing many of these advancements and capabilities. Following the event CodeValue hosted the local IL events at two locations (TLV & Haifa), where our experts highlighted the main news and interesting capabilities that .NET has to offer this year. All sessions were given in Hebrew.

We are here for you

Need consulting or development services? We’re here for you.


Net Conf Israel – Intro & Building Cloud Native Apps with .NET Core 3.0 and Kubernetes

Eran Stiller – Chief Technology Officer @CodeValue, Azure MVP, Microsoft Regional Director


Blazor and Azure Functions – a serverless approach

Alex Pshul – Software Architect and Consultant @ CodeValue


What’s New in C# 8.0

Moaid Hathot – Senior Consultant @ CodeValue


Cloud Debugging A Revolutionary Approach

Alon Fliess – Chief Architect @CodeValue, Azure MVP, Microsoft Regional Director

CodeValue is growing, and we are always on the lookout for talented people, so please feel free to check out our current open positions here

Join the Alpha team: https://codevalue.com/hr-main/

Image

Leehee Gerti, Director of Marketing @CodeValue|5 years

Architecture Next 2019

A few weeks have passed since Architecture Next 19, and now it’s a perfect time to share with you our thanks, thoughts, and lectures.

340 software architects, team leads, CTOs, VPs R&D, developers, and other tech personas who design and influence the development of software products and services attended our conference. They came from more than 100 different companies, ranging from startups through government agencies to international corporates, to enjoy 11 thought-provoking sessions delivered by CodeValue’s top experts, including some of the leading figures in the realm of software architecture in Israel.

The software development world is evolving at a tremendous pace. New technologies and platforms are abundant, and things that were brand new a year ago sometimes suddenly seem like ancient history. At the conference, we discussed revolutionary concepts and tools and demonstrated how they can be applied toward making your next software system a better one.

Keynote: From Monolith to Microservices – Lessons Learned in the Real World

The keynote speakers, Alon Fliess (our Chief Architect) and Eran Stiller (our CTO), shared with the audience their take on “Lessons Learned in the Real World – From Monolith to Microservices”.

From Monolith to Microservices – Lessons Learned in the Real World

Main hall “Executive Track” sessions:

Following the keynote, we split into two separate tracks. In the main hall we hosted the “Executive track” with these sessions:

“It’s a Serverless World” by Eran Stiller

“Modern IoT Trends” by Alon Fliess

“Service Mesh – The missing piece” by Tomer Shamam

“Micro Front-ends, Myth or Reality?” by Amir Zuker

“When Process and Architecture Meet – Application Modernization from the Project Perspective” by Ronen Rubinfeld

“Information Visualization in Big Data Systems” by Erez Pedro (video/presentation soon to come)

“Hands-on Track” sessions:

In the second hall we hosted the “Hands-on track” with the following sessions:

“WebAssembly – Future of Web?” by Guy Nesher

“Data Analytics at Scale: Implementing Stateful Stream Processing” by Michael Kanevsky

“Istio & Envoy” by Omer Barel

“Serverless IoT Story – From Design to Production and Monitoring” by Alex Pshul & Moaid Hathot

We are here for you

Need consulting or development services? We’re here for you.

Image

Ilana Glotman, VP HR |5 years

Talent Hiring @CodeValue – Behind the scenes

Candidates often ask us what we are looking for when searching for talent, what our hiring process looks like, and how they can prepare. I’ll be happy to answer these questions by providing a little peek behind the scenes of the talent hiring process at CodeValue.

Ilana Glotman, VP HR @CodeValue

Who are the talents that we are searching for at CodeValue?

I am happy to say that the answer to this question is not a checklist of education and experience. We hire people who are passionate about software, love coding, and are eager to learn and adopt new technologies.

Of course, some positions do require specific experience, but we believe that most of the technical skills (programming languages, frameworks, versions, etc.) can be learned with the right training plan, while personal characteristics like positivity, flexibility, out-of-the-box thinking, and a passion for learning are part of a candidate’s “DNA”.

We are flexible in terms of education and years of experience, since our hiring process includes coding tasks and interviews that enable us to identify talented developers with the right DNA. We are looking for people who prefer working in a dynamic environment, enjoy changes, and prefer working on multiple domains and technologies over working on a specific product over a long period. Most of our candidates and employees have been developing software since they were kids, and for them it’s more than just work. However, for some people the passion for software was born at a later stage of their life, even after a different career path.

What does the hiring process at CodeValue look like?

Our hiring process is flexible, and we try to adjust it to each candidate, depending on his or her background, professional experience, interests, and career aspirations. We have several screening methods, such as phone interviews, coding tasks, and technical and HR interviews, that help us identify whether the candidate is a good match for our company, while the recruiters and hiring managers decide which steps, and in which order, are relevant for the specific candidate.

The hiring process at CodeValue begins with a thorough CV examination and a phone conversation, to get to know the candidate, tell him/her about CodeValue, and understand what he/she is looking for at this point in his/her career. Based on this conversation, we decide on the next steps of the process.

I’ll describe each of the next possible steps:

Technical phone interview – a phone conversation with one of the CodeValue developers, who is an expert in the relevant field, to map the candidate’s knowledge and potential, provide more information about a career at CodeValue, and answer any questions.

Coding task – all software professionals at CodeValue are talented developers who love coding and remain hands-on; therefore, our hiring process always includes one or several coding tasks and/or a review of existing code.

Face-to-face interview – an interview with one or more CodeValue senior experts, usually based on the candidate’s code. We are not fond of tricky questions, so our technical interviews usually look like a conversation about the candidate’s code, design, and architecture decisions. During the interview, we try to get to know the candidate, learn about his/her professional skills, and understand his/her potential and learning ability. We wish to map the current level of the candidate’s knowledge, based on his/her experience so far, but it’s also important for us to understand what the candidate is passionate about and wishes to learn in the future. In most of our technical interviews, the interviewers are happy to share their knowledge and teach the candidate something new, and it’s an additional way for candidates to get to know us better.

HR interview – an interview with an HR representative, sometimes together with the hiring manager, to get to know the candidate better, focusing on the soft skills and the person’s “DNA”, to determine whether there is a good match for our organizational culture and the specific position. It’s also the time to address any issues and concerns and answer the candidate’s questions.

Since people tend to be nervous before HR interviews, I suggest following these two tips that might help:

1. Be yourself – share your success stories and challenges; don’t be afraid to talk about situations when things didn’t go as you planned. We all make mistakes, and it’s important to be able to learn from them. I suggest thinking about relevant examples from your career that you can share during the interview.

2. Be prepared – do some research about the company; if you know someone who works at CodeValue, that will be the best way to learn about us. Be familiar with your CV (it’s not as obvious as you may think), decide what’s important for you to tell the interviewers (examples of success, challenges, unique things about you, special interests), and think about what will make you a good CodeValue employee and how CodeValue will contribute to your career.

The tips above may seem counterintuitive, but they are not. On the one hand, while preparing for the HR interview, it’s important to organize your thoughts (preferably on a piece of paper), think of relevant examples and do some research about the company. On the other hand, it’s important to act naturally and share real-life examples, so both you and the interviewer understand if there is a good match.

See you at the interview 😊

Good luck!

Image

Hanan Zakai|5 years

Oops, we made it..

Hope for the best, prepare for the worst

Yesterday, while discussing a company’s IoT project, I asked their VP R&D: “If you need to update the version, how will you do it?” He thought a while and replied: “Probably we will be required to send a technician to every location.” Then I said: “Your worst-case scenario will materialize if your pilot is successful…” The people on the other side of the table nodded quietly. I call this “Oops, we made it”.

Oops we made it blog post image

In the past twenty years, I have met thousands of startups/entrepreneurs. One phenomenon that really puzzles me every time is the fact that most of them had their story prepared and straightened up in case they failed. Yet, despite their inherent optimism, they weren’t prepared for success. The roots of this course of events lie in the fact that most ventures start with a limited budget and a tight schedule. Once they succeed, the race becomes even faster and they can’t stop: investors pushing, customers “piling up” demanding support and new features, and after all, if it isn’t broken why fix it, let’s keep on running. However, sooner or later, sprinting turns into jogging, which shifts into strolling, down to the very low pace of crawling. Then, usually, the entrepreneur understands, or is forced to by his clients, that he must perform “open heart surgery” on his system.

Real startup lesson #1: Hope for the best, prepare for the worst, your solution’s architecture should support both cases.

Despite the fact that this is old news, in the past year it seems this “successful failure” life cycle is getting shorter, mainly in IoT projects; once you put the HW parameter into the equation, the brown stuff hits the fan very quickly.

Embedding the physical element (HW) into your solution transforms simple, no-brainer attributes like connectivity, security, and version updates into very challenging ones, and quite quickly. That is why I strongly recommend that when developing IoT projects, one should use one of the cloud vendors’ IoT platforms.

Real startup lesson #2: When developing an IoT project, fight your urge for a quick-and-dirty, “I can do it” attitude. Invest time in learning and using an IoT cloud platform. The longer route is definitely the shortest one to success.

Published by Hanan Zakai

Sales Director @CodeValue

Image

Alon Fliess|5 years

Storage Spaces recovery war story

Storage Spaces is a technology that is mainly used by large file servers.

However, Microsoft has brought this technology to Windows 10 and Windows Server Essentials – what used to be the Small Business Server. You may use this technology when you need a large disk space that can grow dynamically, by striping many physical hard disks. This is a software RAID system that lets you create a Storage Pool out of many physical hard disks. You can have a simple spanning configuration (RAID 0), a mirror configuration (RAID 1), or a parity configuration (RAID 5). Storage Spaces lets you add more disks as well as remove old or broken disks. If you have a single disk failure, you are safe; in some configurations Storage Spaces can protect against multiple disk failures, but that requires many hard disks. Many PowerShell commands, as well as an administrative user interface, let you manage Storage Pools and the Virtual Disks that you allocate from them.

Storage illustration

This is going to be a long, diary-like blog post that describes over a month of fighting to bring a 35TB (70TB physical disk space) logical disk back to life. If you are in such a situation, I’m pretty sure you’ll read every word – I read everything I can when I search for a cure to a problem.

Friday, November 30, 2018

I have never lost a single file. Hard drives are fragile; the internet bandwidth is still not fast enough to back up large amounts of data in a timely manner. My solution for file-insurance: have local redundant storage as well as a cloud-based backup.

I have a Storage Spaces pool that has grown larger over the years. It has 12 hard disks in a mirror configuration. The smallest pair of disks is 3TB, then there are pairs of 4TB, 6TB and 10TB, and lately I added two 12TB disks. Altogether that gives me about 70TB, which is 35TB in a mirror configuration.

You may ask why I need so much disk space. Most of it is devoted to computer backups, which I keep only locally, as opposed to sending them to a cloud backup – they are too big and change daily. Another big chunk of storage goes to my raw camcorder files and my video-editing work. I also use it for other purposes, such as my company’s financial documents, course materials that I develop, and more.

For the cloud storage I use Code42 – Crashplan, but “only” 4 TB is stored there. This includes the most important files, excluding local computer backups and some files that originally were downloaded from the web and can be recovered by re-downloading them (MSDN subscription ISO files for example). All my pictures and some of my final video files have a copy on OneDrive (for easy phone access) and on Amazon drive (for Amazon Echo show).

My Storage Pool setup is a bit strange, but I have a good excuse for that. Originally, I was using Windows Home Server 2011 to run my small home office and my home computers – mainly as a file server, for computer backups, and for internet remote access. When I moved to Windows Server 2012 Essentials, I created a new large Storage Pool, allocated a virtual drive, and copied the files over. I bought several big hard disks, and for a convenient setup process I connected them to my PC and created the storage pool using the Windows 10 Storage Spaces capability, which is like the Storage Spaces feature of Windows Server. Using my PC made it easier to copy from the old server to the new storage space. After copying the files from the previous server over the network to the new Storage Space on my Windows 10 machine, I removed the old disks from the server and added them to the storage pool on my Windows 10 machine. This was now the new pool, and there was no way back, since the original disks had joined the new pool.

I built a new Server, much stronger than the old one. This new server had a server board and a Xeon CPU, with 64GB of RAM. The board has 13 SATA connectors, which was very important for being able to add more hard disks in the future.

With this powerful hardware, I installed Windows Server 2012 Essentials as a virtual machine. I did it by the book, installing a Hyper-V Server host and installing Server Essentials as a guest.

I moved all the storage pool disks from my Windows 10 machine to the Windows Server 2012 Essentials that I installed on the new server-based machine, but the system did not recognize the pool. I realized that the problem was a version mismatch: the Windows 10 version was newer than the Server 2012 version. So I solved it in a very original way – I replaced the Hyper-V Server host with a freshly installed Windows 10 client OS and made it the Hyper-V host machine. Windows 10 recognized the large disk! I used the Disk Management MMC snap-in to mark the Storage Spaces virtual drive offline and added it as a physical drive pass-through to the Server 2012 virtual machine. It worked!

When Server 2016 came out, I upgraded the system (the virtual machine) but left Windows 10 as the host managing the storage pool. Over the years I added more and more disks. Each time I added a new disk, I shut down the server virtual machine, brought the storage pool drive online again, managed the storage pool, and took it offline again.

At some point, it was almost impossible to add more disks. I still had enough SATA sockets, but I had no room in the Server enclosure. Therefore, I designed and built a 3D printed case.

Finally, I ran out of room in the server enclosure as well as SATA connectors on the board. I didn’t give enough attention to the risk of adding disks to the storage pool using USB-C; I assumed that the 10Gb/s transfer rate would be enough. After all, Storage Spaces is a Windows 10 feature, and a home user can use it on any hardware constellation. I was wrong!

Bad things happen

Last week I decided to add more storage to the pool since it was almost full. Since I don’t have room in the server case for more disks, I bought a four-disk enclosure box from Amazon and ordered two 12TB disks.

I used a USB-C card that supports 10Gb/s to connect the external drive. After adding the disks, there were times when the server got slow. Looking into the event viewer revealed that I had a problem with “UASPStor” – “Reset to the device was issued” – every 15 seconds, and from time to time this warning: “The IO operation at logical block address 0x24e049300 for Disk 13 (PDO name: \Device\000001df) was retried.” I tried to install a new driver but lost connectivity to the drive, so I rolled back the driver version. Since the USB-C card is based on an old chip, I ordered a new one. I also followed these steps that I thought would help resolve the problem.

I thought that my problems were over, but a few days later I had another issue with the storage space: the virtual disk disappeared. Looking at the Storage Spaces UI, I saw that the pool was unhealthy and one disk was missing. I tried to remove the disk and reconstruct the virtual drive, but the storage pool job was stuck at 0%. I read all I could find on the web about what can be done, only to find out that many people just give up and restart from scratch. I was determined to fix the problem and not give up.

I tried every PowerShell command related to Storage Spaces that I could find, from Get- and Set-PhysicalDisk to running and stopping recovery and optimization jobs. The problem was that the recovery job returned immediately with success without doing anything, and the other jobs were stuck.


I couldn’t remove the missing drive, and the “Preparing for removal” state did nothing. I also saw that the new drive I had added had the “Prepare for removal” command option while the other drives did not; it looked like the pool hadn’t taken the new drive as a replacement for the old faulty one.

I’ve tried different commands, such as:

Repair-VirtualDisk > returns almost immediately and does nothing.

Remove-PhysicalDisk > can’t remove, since the current virtual disk is unhealthy.
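
For anyone trying to follow along, this is roughly the family of commands I was cycling through; treat it as a sketch with illustrative pool and disk names, not a recipe:

# Inspect the pool and the physical disks behind it.
Get-StoragePool -FriendlyName "MainPool" | Get-PhysicalDisk |
    Select-Object FriendlyName, OperationalStatus, HealthStatus, Usage

# Mark the faulty disk as retired so Storage Spaces stops allocating to it.
Set-PhysicalDisk -FriendlyName "OldDisk4TB" -Usage Retired

# Ask Storage Spaces to rebuild the virtual disk and watch the repair job.
Repair-VirtualDisk -FriendlyName "MainDisk"
Get-StorageJob

# Try to remove the missing disk (this is the step that refused to work for me
# while the virtual disk was unhealthy).
$missing = Get-PhysicalDisk | Where-Object OperationalStatus -eq "Lost Communication"
Remove-PhysicalDisk -PhysicalDisks $missing -StoragePoolFriendlyName "MainPool"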

After spending a whole day and most of the night, I crashed for four hours.

Saturday, December 1, 2018

Recovery actions

In the morning, I first started to restore the most critical files from Code 42 – CrashPlan.

I then decided to check a tool that I found in one of the technical group answers: the ReclaiMe Storage Spaces Recovery tool. I also sent an email asking friends at Microsoft for support; the problem was that by then it was already the weekend. I got some answers suggesting a few PowerShell commands that I had already tried. They also asked me to send some Storage Spaces log files:

Microsoft-Windows-StorageSpaces-Driver%4Diagnostic.evtx

Microsoft-Windows-StorageSpaces-Driver%4Operational.evtx

I knew that it would take time to get the answers; I decided to continue trying to get my files back.

The ReclaiMe tool found the storage pool – in fact it found three of them, although I only have one. I decided to pay for the tool ($300) and pressed the “Find drives” option. This is the result:

I thought that this tool could fix the problem; however, the tool lets you restore the files by copying them to another drive, which requires extra storage. On my PC I have 1TB of free space, and I found additional free disk space on other computers on the network. I also added some 1TB and 2TB disks that I used to have in the server and that were sitting in my drawer. With this extra space, I could start the restore process. I restored from Crashplan as well as from the original disks, using the “ReclaiMe File Recovery Standard” tool that I had to purchase in addition to the Storage Spaces recovery tool. At first it seemed unfair – I had just spent $300 on a recovery tool and now, to do the actual recovery, they asked me to pay more. Looking at the mail they sent after I acquired the Storage Spaces recovery tool, I saw that they give a huge discount, and the File Recovery tool is almost free. The recovery process takes a lot of time, but it lets you copy files that it has already found, and it knows how to rebuild the original folder structure. I only wish it could repair the storage space instead of copying the files, since copying requires disk space similar to the original. I decided not to recover the computer backup files – they are 14TB; I’ll do a fresh backup once the server is up and running again.

After more than 10 hours it had scanned only 2% of the disk, but it had found most of the files, so I decided to recover some of the folders before letting it continue the scan. According to the documentation, this means that I may not be able to copy everything until the scan is over. Since those files originated from the Web, I could download them later if I need any of them. One thing to notice is that from time to time the ReclaiMe File Recovery tool complained about a missing hard disk:

This told me two things: first, the USB-C connection was still having issues, and this may be the reason I couldn’t repair the storage space. I was planning to get another USB-C card soon, or perhaps a SATA expansion card instead. The second annoying thing was that the recovery process waits for human interaction – pressing the OK button. To press the OK button automatically, I tried three programs: Buzof, DialogDevil, and ClickOff. None of them worked, so I wrote a piece of code that did it:

#include "pch.h"
#include <iostream>
#include <Windows.h>

using namespace std;

int main()
{
    std::cout << "Starting...\n";
    int times = 0;
    while (true)
    {
        // Look for the ReclaiMe "Drive offline" warning dialog by its window title.
        HWND hParent = FindWindow(nullptr, L"Drive offline");
        if (hParent == nullptr)
        {
            cerr << "Can't find parent window, retry in 10 seconds" << endl;
        }
        else
        {
            ++times;
            cout << "Posting message for the " << times << " time." << endl;
            // Closing the dialog dismisses it just like pressing OK, so ReclaiMe retries.
            PostMessage(hParent, WM_CLOSE, 0, 0);
        }
        Sleep(10000);
    }
}

I compiled it as a statically linked (C runtime) x64 Win32 console app and ran it as an administrator. With this, I no longer had to press OK manually. I think that ReclaiMe should change the dialog and add a timer that automatically retries.

Since file recovery is not as robust as restoring from a backup, I decided to restore from CrashPlan everything that I have there, and to restore everything else using ReclaiMe.

Sunday, December 2, 2018

New SATA card – New Day?

I went to work, knowing that in the evening I would have to continue my Storage Spaces restore work. When I came home, a package from Amazon was waiting – a new SATA PCIe expansion card. The ReclaiMe File Recovery scan was at 13%, and my code had dismissed the warning dialog about 100 times. I decided to stop it. I just wished ReclaiMe had a way to stop without losing the progress – a reasonable feature when we talk about a process that takes days to finish – more about that later.

I shut down the machine and replaced the USB-C card with the new SATA card and somehow managed to pack the additional disks into the Server.

I was hoping that the power supply could handle the two additional hard disks – 16 disks in one box in total. According to this PSU online calculator, my PSU could handle it, and even more drives if need be.

I turned the machine on, and when it was up, I saw that it recognized the disks. This is a strong feature of Storage Spaces: as a software-based RAID, it knows how to rebuild the storage pool no matter if you swap drive locations, exchange SATA connectors for USB, or even move to a whole different Windows machine. With a little more hope, I tried the PowerShell storage commands and the Storage Spaces UI; to my disappointment, nothing had changed.

I noticed that Repair-VirtualDisk showed something for a very short time. I used Camtasia Studio Recorder, captured a video, moved through it frame by frame, and this is what I found:

It lasted only 3 frames of the video. Very strange behavior.

Back to plan A, I needed to restart the file recovery process. I lost almost two days of scanning but now the scan should be faster since the connection to the disks is via SATA and not a broken USB link.

After four hours, it was at 0.27% – not faster than the previous scan; however, the annoying disk-disconnected dialog box was gone.

I contemplated another approach to fixing the virtual disk: maybe I should install a new Windows Server 2019 as a dual-boot system and use its advanced tooling to repair the file system. This might work; however, it might also be destructive – will I be able to use the repaired disk back on Windows 10 and Server 2016? I sent another email to Microsoft, knowing that answers would come only on Monday afternoon, when it is morning in Redmond.

I couldn’t go to sleep yet, so I decided to search the log files that I’d sent to Microsoft, and I found this:

Drives hosting data for virtual disk {8420A2E7-6021-4294-A856-3CF76D94B11E} have failed or are missing. As a result, no copy of data is available at offset [Slab 0x0 Column 0 Copy 0 State 4 Reason 5] [Slab 0x0 Column 0 Copy 1 State 2 Reason 1] [Slab 0x0 Column 0 Copy 2 State 3 Reason 7] [Slab 0x0 Column 0 Copy 3 State 3 Reason 7] [Slab 0x0 Column 0 Copy 4 State 3 Reason 7. ]

How does this happen on a mirror array? Is there a way to fix it – at least bring the disk back, even with some corrupted files – or is my last resort to wait a week until ReclaiMe finishes scanning everything and restores the files to another set of disks? I started to realize that restoring from backup was probably my only hope.

Before I went to sleep, I checked the CrashPlan restore process; the Video folder was at 664GB out of 1TB. At least one process was going well.

Monday, December 3, 2018

It is 9:30 PM now; I am back from work and from a family Hanukkah candle lighting. ReclaiMe has scanned 16% so far. It is much faster than the previous scan – almost twice the speed – so the SATA card does enhance the scanning performance. The Crashplan restore of my video files is almost done; it says that it has downloaded 1TB of 1TB and that it needs 15 more hours to finish. I hope it will be faster.

Problem Hypothesis

During the day I was thinking about the root cause of the problem, and this is my hypothesis. The two new 12TB disks were connected via the USB-C PCI card that had the repeating reset problem. When the old 4TB disk failed and the server did not respond, after a while I had to turn it off. When it came back, the Storage Spaces sub-system realized that there was a problem: some data appeared in the Storage Spaces management blocks but was not found on any disk (the failed 4TB disk and the new 12TB disks with the reset problem). In such a case the virtual disk is unhealthy and the system will not bring it up. Is there a way to fix the file system, even at the risk of losing some files? Do I want a file system that is fixed at the cost of some lost files? Maybe, just for the sake of easy restoration. With ReFS there is no ChkDsk utility, since the file system is resilient: if you change a file, it writes a copy of the data to other disk locations and uses a transaction-like mechanism to commit the change. ReclaiMe says that this behavior leaves many copies of the files on the disk and allows them to be restored, even to older versions. Later I learned that there is a Task Scheduler task – a Data Integrity Scan – that scans fault-tolerant volumes for latent corruption, and that it had never run on my machine, since the disk is marked as offline…

Power Outage – No way!

☹

The Power Company left a note notifying us that there is a planned two-hour power outage tomorrow morning. I don’t remember the last time they shut off the power – it is something they don’t do often – but it is going to stop the file scanning. I decided to hibernate the machine in the morning and continue the scan when I’m back home in the afternoon. To do so I must enable the Windows 10 Hibernate option; I usually disable hibernation to spare the SSD disk space that the Hiberfil.sys file occupies. The command powercfg /h /type full re-enables it. No, I don’t see the Hibernate option in the power menu. So let’s try one of the options here. Still no Hibernate option. Let’s try hibernation by issuing the shutdown /h command. The computer shuts down almost immediately. Oh no! It came back with a standard boot. Back to square one with the file scanning.

😊

I got the Hibernate option; at least I could start the scan now, hibernate in the morning, and resume from noon.

☹

I am trying Hibernate again, now from the power menu. No, a standard boot again… 

I checked the event log:

Windows failed to resume from hibernate with error status 0xC000007B.

Hibernation did take place, but the resume failed. Too many problems… I decided to stop for a day and resume the file scanning tomorrow after the power outage.

I got an email from Microsoft. They think the problem is happening because more than one drive is having errors. They told me that they still have not given up, and that they will reach out and tell me what to do later today. Their assumption matches my hypothesis.

Tuesday, December 4, 2018

I am back at home; the power outage took place in the morning and by noon they restored the power supply. My wife told me which UPS worked and which turned off immediately. At least now I know which UPS needs a battery replacement.

In two hours the Microsoft Connect() online conference should start. Until then, I’ll restart the recovery process. On my way home, I stopped at a computer store and bought another 10TB hard drive. I am going to use it as a restore destination.

I ran ReclaiMe File Recovery and opened the XML file that stores the Storage Spaces disk array discovered by the ReclaiMe Storage Spaces Recovery program, and found out that the file was no longer valid since there is a new drive.

I had to run the Storage Spaces Recovery again.

I had to identify the disks that are in the array, or rather those that are not. As you can see, they are Disks 5, 10 and 15.

The discovery process takes time:

And we have a go…

Checking my email, there was a message from Taylor, the Microsoft software engineer who is kindly helping me solve the problem:

Okay, looking at these logs, it looks like two other drives are intermittently failing IOs.  Both of your WDC WD12 1KRYZ-01W0RB devices are seeing IO errors including timeouts and errors indicating the device is no longer present.  It also seems that some of our resilient metadata got hit by these failures, and that’s why space isn’t attaching.  Luckily this is something I can fix, but I’ll need to get a tool out to you.  It will probably take until tomorrow or Wednesday, but I should have something that at least will let you attach the space.

Thanks,

Taylor

Amazing, there might be a light at the end of the tunnel! Meanwhile, I’ll continue with the restore process. I started to copy the files that were downloaded from Crashplan – about 1.5TB by now. I also started to restore other files.

Wednesday, December 5, 2018

No sign from Microsoft yet; it is 13:30 in Redmond, so there is still a chance that I will get the repair tool today. When I looked at the Crashplan restore process, I found out that one of the hard drives that I took from my drawer was failing. I removed it and continued to use only the 2TB drive, which is newer. When I tried to restart the Crashplan restore, I didn’t find any files. It took me some time to understand that Crashplan had marked all the Server backup files as deleted. I don’t know why; maybe it is because over a week had passed since it last had a connection to the Server. I contacted Crashplan support; they opened a ticket and will reply at a later point. I asked the Crashplan software to show deleted files, and I resumed the download process.

Up until now, I have restored 419,428 files from Crashplan – 1.82TB of storage. ReclaiMe has found 2,542,741 files – 20TB – and it has only scanned 11% of the disk.

Thursday, December 6, 2018

Still no answer from Microsoft. The Restore from Crashplan continues. I’ve got answers from Crashplan support:

Hi Alon,

Thank you for contacting Code42 technical support.

My name is Lawrence and I am helping out Cecilia on some of her tickets as she is out of the office.

I am happy to see that the error message has gone away and that you can access your restore. For the files being marked as deleted, this happens when CrashPlan can no longer see files and folders during a file verification scan. This can happen if the files/folders have been deleted or are no longer accessible at the time of the scan. CrashPlan by default does not remove these files and will update their status in the backup instead. You can verify that your settings are this way by checking your deleted file retention settings.

Once the files have been restored and a file scan can see and access them again, you should see the files updated properly in the backup and no longer marked as deleted files. Please let me know if you have any questions or comments.

Best regards,
Lawrence J
Customer Technical Support
Code42.

OK, no problem with Crashplan. The ReclaiMe scan is also continuing; by the end of the day it was at 18%. I decided to copy some of the usable files from the content that ReclaiMe has found so far, just in case the scan stops again and I need those files. I will need to restore these files again after the scan is over, since there might be other files in those folders that the scan has not yet found.

I asked ReclaiMe support the following questions:

  1. Is there a way to know if a scanned folder contains all its files or should I need to wait until the scan is over, before copying it?
  2. Will I know if there are files that the scan did not find?
  3. Can I save the progress, so if the machine goes down because of an error or power failure, I can restart from the middle and not start the scanning process again (I had to restart it already twice and lost three days)?

I hope that they reply soon.

Friday morning, December 7, 2018

It’s been a week that my Server has been off. I am planning to bring it online using the file system that I have already restored. It will work with one 10TB disk while I continue to scan and restore other files. I can do that since the server runs as a virtual machine: I’ll change the disk setting (drive E) to use the new 10TB disk instead of the old Storage Spaces based virtual disk. I hope it will work. I will stop the client computer backup service; I need much more disk space for backups.

Friday afternoon:

I decided to postpone my intention to bring back the server today and wait another day or two, until I restore the “File History Backup“. It weighs about 1TB and it is backed up to Crashplan. I don’t want to have another unstable service, and client computers need this folder to push the history files from their local cache.

Checking the file history backup at Crashplan, I found a problem: I need to change my plan of restoring “File History Backup”. According to Crashplan, I didn’t back up all of it – I didn’t back up my own file history files:

I did it because I have a separate backup of my Windows 10 PC and my Surface Book 2 to Crashplan. When you do a cloud backup and you have too many files that may change and need to be backed up, they compete over the network upload bandwidth, so you should choose a backup set that contains the most important files. Since my desktop and laptop are backed up to Crashplan anyway, I decided not to back up their file history. I am not sure it was a wise decision. File History contains previous versions of files at a default granularity of one hour, i.e. every hour it saves all tracked files that have changed since the last time it saved them. If you write a document or edit source code, you can go back to previous versions. The server backup has a granularity of one day, and if you go to past backups, the granularity becomes weeks and months. Today most of our files end up in the cloud – in our mail server, SharePoint, Google/Amazon/OneDrive/Dropbox drives, social networks, GitHub… Some of those places also have versioning capabilities, but File History does it locally, every hour. Crashplan also gives you history (old versions) backup, at a one-day granularity.

I must decide whether I want to start the File History Backup fresh (after all, I can always go to Crashplan and look for an old file there) or wait until I restore it with ReclaiMe.

Talking about ReclaiMe, I got a mail from their support:

Hello Alon,

In ReclaiMe File Recovery there is no capability to save the state of the software, but we do have such capability in our ReclaiMe Pro software (www.ReclaiMe-Pro.com) which is designed for data recovery technicians. I have issued a full key for you so that you can solve your case (35 TB refers more to complex cases rather than to “home user” cases):

XXXX-XXXX-XXXX-XXXX

Download the software at http://ReclaiMe-Pro.com (request the link) and then activate it with the key above.

Please do not publish the key in the web, it is just for you)

Also, did you check the files the software found? I mean did you preview them, preferably images or pdf? Are they OK?
With ReFS, you need to wait till the end, we do have algorithms to bring ReFS data earlier into the scan (and most data are brought within 3-5 % of the scan) but still, some files can be found at the end.

Best,
Yulja P.


ReclaiMe Support Team


They are so kind and helpful. Do I want to stop the scan and start over with the “Pro” version? No, I will continue to use the standard version scan and only if it fails, I’ll switch to the pro version.

The “Pro” version has other capabilities, such as Partition Recovery, even for Storage Spaces. I don’t think it will fix my problem, since the problem is not in the partition but in the Slab metadata, but I can try it once the scan is over and after I copy all my files.

The Crashplan restore is over. The only folder that I didn’t restore is File History Backup, which does not contain the history of my user. I decided to restore the backups of all other computers/users from Crashplan and to restore the file history backup of my user with ReclaiMe. When this restoration process is over, I will bring the server back.

Saturday, December 8, 2018

New day, new problem. One of the disks is causing problems again:

This time I’m a bit nervous. Last time the problem was the USB-C connection, now everything is connected via SATA. Looking at the SMART information, it says that there are many Interface CRC Errors:

The problem might be the SATA cable. I am not going to fix it until the scan is over. I again ran the code that I wrote to automatically dismiss the error dialog. I’m almost sure that the problem is not the hard disk but the connection – or the ReclaiMe app itself, since there is nothing in the system event log that reports any disk problem. ReclaiMe reports two different disks getting disconnected, and I think that having the same problem with two different disks is unlikely. Anyway, my “Dismiss Dialog” application does the job.

I also decided to copy all the files that the scan has already found, just to be on the safe side, so I won’t regret not doing it when I could. The File History Backup files are not yet fully discovered by ReclaiMe; you see folder names as numbers:

ReclaiMe does not yet know the real folder names. This means that you can find files but can’t reconstruct the folder tree. I can’t use it until it finds them all.

I shot Microsoft another email. Just to let them know that I still need their support.

Restoring files took all day; by now it has restored 2TB, and it will probably take the night to finish. I checked some of the files and the directory structure, and I feel good about the restoration results. Most of the folders contain all their files; only the File History Backup is not in good shape yet, but that’s not as important. Anyway, I’ll let ReclaiMe finish the scan and then restore the files again.

Sunday, December 9, 2018

The restore process is over. It restored about 4TB of data, and it looks good. I also looked again at the File History Backup folder and saw that the scan had found some of the folder structure, but not under my account, which is the one that I did not back up to Crashplan. The scan process is now at almost 43%, so there is still a good chance that it will find all of it. I began thinking about restoring the Client Computer Backups.

It weighs 11.5TB, and it looks like the scan process has found all of it. There are two concerns: it will take about 3–4 days to copy it to another disk, and I don’t have such a large disk. I have several days until the scan is over to decide what to do.

Monday, December 10, 2018

It had passed 50 percent!

I got an email from Taylor from Microsoft. He would like me to collect more information before he can send me the tool that will fix the problem. I replied, asking if it is safe to run the commands that collect the data while the scanning process is ongoing. I will try to fix the pool after the scan and recovery are over – to be on the safe side.

Tuesday, December 11, 2018

I got an email from Taylor; he agrees that it is best to finish the scan before doing anything else.

Wednesday, December 12, 2018

In the morning, the scanning process was at 67%. The auto-dismiss program has closed the disk-disconnected dialog 4,700 times, but it now appears less frequently. I sent an email to Taylor asking whether he thinks the pool can be repaired to a state where it is safe to use, or whether I will only be able to attach the virtual drive and copy the files over to a new disk array. If I am able to fix the pool, I will take the risk and not copy the Client Computer Backup folder; otherwise, I may buy an additional 12TB hard disk just for that.

The File History Backup folder tree is not yet fully discovered by ReclaiMe; I think this happens because this folder changes often and contains many deleted files that ReclaiMe finds anyway. I’ll wait until the scan is over, but there is a good chance that only the repair tool I’m expecting from Microsoft will recover this folder.

Wednesday Noon

☹

A phone call from my son: “Dad, there is a power outage, all the UPSs at home are beeping.” Oh no, not again…

Wednesday Evening

I’m back home, back to square one. This time I decided to run the tool that I got from Taylor at Microsoft. It worked and created lots of data; I zipped it (1.7GB after zipping) and uploaded it to OneDrive. Once the upload is done, I’ll share it with Taylor. Meanwhile, I installed ReclaiMe Pro. It has a different user interface and many more options. It looks a bit outdated, but I don’t need it for its looks; I need it so that I will be able to scan the disk and save the progress.

It runs much slower, maybe it is something that I didn’t set correctly. The Save state button is disabled, but the documentation says that it will be enabled after a few minutes of scanning.

Thursday Morning, December 13, 2018

I got an answer from ReclaiMe support that says the scanning speed should be on a par with the standard version. And indeed, the scan became faster; it is now the same speed as the ReclaiMe standard edition. However, the “Save state” button is still disabled.

Thursday Afternoon

I’m back home. Back to square one. For some reason the scan stopped, the PC did a reboot:

I thought that the system would not update often, because I had set the Windows Update advanced options not to install updates unless they are very important:

I guess 160 days had passed… nevertheless, now I asked the system not to do an update for 35 days.

Friday, December 14, 2018

The new scan has reached 7.8%; it found the main directory structure and most of the files, but the “Save state” button is still disabled. I really need to save the state if I want this scan to come to an end.

Saturday, December 15, 2018

Scanning continues, still, the “Save state” button is disabled.

Sunday, December 16, 2018

Scanning continues, still, the “Save state” button is disabled.

Monday morning, December 17, 2018

Scanning continues, the “Save state” button is still disabled; it is now at 64%. I got an email from Taylor at Microsoft. The email contains a tool and instructions that are supposed to fix the problem. I really want to run the commands, but it will stop the file scanning, and if something goes wrong, ReclaiMe might not be able to work anymore. I decided to finish the scan, copy the files again, excluding the Client Computer Backup, and then run the tool. I must do it this week, before the people overseas go on Christmas vacation and there is no one who can help. I think that on Wednesday I can start the copy, and on Friday I will run the tool.

Monday afternoon

I got an answer from ReclaiMe support; they asked me to try pausing the process and see if the “Save state” button becomes enabled – and it did.

Tuesday evening, December 18, 2018

The scan is almost over. I saved the state again.

Tomorrow I’ll copy the files!!!

There are still lots of unclassified files – files that were found but whose location in the directory structure is not known. I hope they will be sorted out when the scan is done. If not, I’ll copy them, just to have them in case I find that an important file is missing. I hope I won’t need them anyway because of the fix that I got from Microsoft, but I’m doing it to be on the safe side. As Taylor from Microsoft said: “Better safe than sorry”.

Wednesday morning, December 19, 2018

The scan is over; to be on the safe side I saved the final state. I started copying the files. It has found 400GB of unclassified files, which is either strange or there is a real problem with the integrity of the filesystem. It says that the copy process of about 5TB is going to take about 32 hours; I hope that it is a wrong early estimation.

Wednesday evening

I’m back home; after more than 12 hours, the copy process had stopped because of an “overflow” problem. I continued the copy, but I probably lost 10 hours. Now it says that it is going to take 7 days to copy the files; this is a wrong estimate for sure.

Thursday Afternoon, December 20, 2018

The copy is over – I hope, because there is no sign that it finished successfully. I checked the properties of the root folder and found that it probably contains a copy of all the restored files that I needed. With 5.33TB and 1,962,497 files, it looks promising.

It’s time to run Microsoft’s tool that should fix everything.

Reboot, and: Get-VirtualDisk MainPool | Connect-VirtualDisk

Starting the virtual machine, the server is back.

Yes, the disk is back. Now I need to run:

Start-ScheduledTask “\Microsoft\Windows\Data Integrity Scan\Data Integrity Scan”

I’ve got this in the event log:

OK, I think I know what has happened. I ran the last command on the server VM. According to this: https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/storage/refs/integrity-streams.md, ReFS can’t fix the data if the volume is mounted as a non-resilient disk, and for the Server, the ReFS file system is on a single non-Spaces disk, since it is a physical disk pass-through in the VM configuration. I shut down the server, ran the command on the host, and it was able to fix those problems:
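
For reference, this is roughly what the host-side check looks like; the Get-FileIntegrity / Set-FileIntegrity cmdlets let you inspect and enable ReFS integrity streams per file (the paths here are illustrative, not my real share names):

# Run on the Hyper-V host, where the virtual disk is backed by a resilient (mirror) space.
Start-ScheduledTask "\Microsoft\Windows\Data Integrity Scan\Data Integrity Scan"

# Check whether integrity streams are enabled for specific files.
Get-Item "E:\ServerFolders\Videos\*" | Get-FileIntegrity

# Enable integrity streams on a file that was created without them.
Set-FileIntegrity "E:\ServerFolders\Videos\clip.mp4" -Enable $true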

Friday, December 21, 2018

The server is back, but the storage pool still lists a non-existing retired disk. I decided to go and fix it:

It succeeded:

The pool is fixed, but I still find corrupt files that need to be fixed. I decided to take the safe path and copy all the files to another disk over the LAN. Since I don’t have enough disk space for “Client Computer Backup” and “File History”, for those I wrote a simple program that traverses the disk and reads each file in it; this ensures that if there is corruption, it will be discovered and either fixed by the ReFS integrity mechanism or reported as an error. I could have used one of the utilities that scan files, such as Everything, WinMerge, or an anti-virus scan, but I wanted to make sure that I read all the file data, so I developed my own.
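
My actual program was a small custom utility; a PowerShell sketch of the same read-everything idea (the drive letter is illustrative) would be:

# Read every file once so ReFS gets a chance to validate its checksums and
# either repair the data from the mirror copy or report an error.
$buffer = New-Object byte[] (4MB)
Get-ChildItem -Path "E:\" -Recurse -File -ErrorAction SilentlyContinue | ForEach-Object {
    $path = $_.FullName
    try {
        $stream = [System.IO.File]::OpenRead($path)
        while ($stream.Read($buffer, 0, $buffer.Length) -gt 0) { }  # discard the data; we only need the read
        $stream.Close()
    }
    catch {
        Write-Warning "Read failed for $path : $($_.Exception.Message)"
    }
}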

Saturday, December 22, 2018

Houston, we have some problems:

Both processes found problems. I fixed the program that scans the files to ignore errors and continue, and restarted it. Then I tried to figure out the problem with the file copy: it complained about a missing file, and indeed the file was not there. I copied that file from the Crashplan restore folder, and the copy continued. I needed to do it again for another file.

It makes me wonder, should I compare the original files with the files I restored from Crashplan?

I downloaded WinMerge, and it confirmed my suspicion:

WinMerge can copy the missing files, but I decided to use Robocopy and check the result with WinMerge. I started with one sub-folder:
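
The command was along these lines – source, destination, and switches are illustrative, not the exact ones I used:

# Copy one sub-folder from the Crashplan restore area into the server share,
# with minimal retries and a log file to review afterwards.
robocopy "D:\Restore\Crashplan\Videos" "E:\ServerFolders\Videos" /E /R:1 /W:1 /LOG:"C:\Temp\robocopy-videos.log"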

The result is good, so I did it for all other restored files.

Sunday, December 23, 2018

Continuing with the ReFS fixing, I decided to balance the pool; it is another process that takes time, and it may trigger ReFS healing:
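
If your build has the cmdlet, the rebalance can be kicked off from PowerShell as well; a one-line sketch with an illustrative pool name:

# Rebalance data across the physical disks of the pool.
Optimize-StoragePool -FriendlyName "MainPool"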

And my code that scans the files continues:

As you can see, it continues to find problems, and ReFS fixes them.

I did see some files that it could not fix, and trying to copy them resulted in this error:

When I copied the file again it succeeded; this shows that ReFS eventually fixed the problem. I think I am going to spend a couple more days recovering the file system before I bring back the server. Then I will try to fix the client computer backups and the file history.

Monday, December 24, 2018

There is some corruption that ReFS can’t fix, so I am copying over the restored files. I am starting to think that I will need to create a new virtual disk and copy the files over, just to make sure that the file system has no corruption.

I decided to try to recreate the pool inside the Windows Server VM. To do so, I disabled the Storage Spaces driver: sc config spaceport start= disabled

I knew there was a risk that Server 2016 might change the storage space in a way that Windows 10 – the host – would have trouble using later. When the server started, it couldn’t recognize the storage space: all the disks were there, but each one on its own. I decided to shut down and go back to my regular configuration.

Tuesday, December 25, 2018

I finished going over the Client Computer Backup files; 3 of them could not be restored and were not even found in the files that ReclaiMe recovered. I decided to give it a try and brought the Server back. I ran the Computer Backup Recovery process on the server, and it failed.

Going back to the host, I found this:

I continued to go through other files, and found out that I have too many corruptions that cannot be fixed:

Even if the event log shows that ReFS was able to fix the problem, the file is still corrupted. Sometimes the file is completely gone – deleted by the system with no warning – and sometimes it can’t be repaired. Maybe this forum thread is right:

https://social.technet.microsoft.com/Forums/windowsserver/en-US/cd28095d-e421-4538-9a9f-a15260e79a75/refs-test-with-corrupt-data-does-it-work?forum=winserverfiles

At least for old versions of ReFS on Storage Spaces.

I decided that I must extract anything I can from the current ill file system and recreate a new one. The main problem is that I have no more hard disk space to copy everything. I already have the data that ReclaiMe restored and the data that I restored from Crashplan – a total of 10TB. Now I need another 5–6TB for a copy of the original server file system – at least for the non-corrupted files – and I am not going to copy the Client Computer Backup. It is going to take another day or two until I finish copying it all. The final state will be three copies: the data restored by ReclaiMe, the data restored from Crashplan, and the data copied from the original server – at least the part that is in good shape. Once I have the new storage space, or maybe now more than one storage space, I will copy everything back. I’ll start with the files that came from Crashplan – I trust the backup to be authentic – then the files from the original server – only those that are not in the Crashplan backup – and then the files from ReclaiMe – those that don’t yet exist, the ones that got lost because of the corruption.

I might lose some files, but those are not as important; the important files are all backed up in Crashplan.

I also decided that I will create a new host for the VM: I will install a Windows Server 2019 host and have the same Server 2016 VM hosted in the 2019 Hyper-V. I will also move from pass-through physical disks to virtual disks. I’ll create several VHDX virtual drives on top of ReFS, and inside Server 2016 I’ll create an NTFS file system instead of ReFS. This will provide much richer file system capabilities; for example, I will be able to use Azure Backup, which can’t be used with ReFS. Moving from a pass-through configuration to a VHDX-based one will also enable VM checkpoint capabilities.
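
A minimal sketch of that layout, assuming a host volume R: on top of the new pool and a VM named "Essentials2016" (both names are illustrative):

# On the Windows Server 2019 host: carve a dynamically expanding VHDX out of the
# ReFS volume that sits on the storage pool.
New-VHD -Path "R:\VHDs\Data.vhdx" -SizeBytes 10TB -Dynamic

# Attach it to the guest; inside the VM it is initialized and formatted as NTFS.
Add-VMHardDiskDrive -VMName "Essentials2016" -Path "R:\VHDs\Data.vhdx"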

I will not upgrade Server 2016 itself, because in Windows Server 2019 Microsoft removed the Server Essentials Experience capability; although I found this, it is not supported, and I will stick with Server 2016 for now.

Wednesday, December 26, 2018 – Thursday, December 27, 2018

I didn’t do much; I just continued to copy the original files to a new disk using Robocopy. Some of the files are corrupted. In the beginning, for each corrupted file, I manually copied a good version either from the Crashplan restore or from the ReclaiMe restore. When I found out that there were too many corruptions, I decided to skip those files and later merge them from the restored folders, as I stated before.

Friday, December 28, 2018

The copy progress continues, I think that it will be over by the evening.

I created a list of the original server files, to be able to compare the restored version against it. The list is not complete, since ReFS deleted many corrupted files, but it is what I’ve got to work with.

A Week Later, January 8, 2019

After more than a month, I have a working server. I didn’t lose much; mainly I lost the computer backups, and I will soon finish backing up all the machines again. File History was restarted for the client machines; I can still search for old files, but the File History user interface shows only files from the last couple of days.

I reformatted all hard disks and rearranged them in the server case. I found another disk with some problems and I decided to take it out.

My current setup is much better than before. The host system is Windows Server 2019 instead of Windows 10. On Windows Server 2019 I have created 3 storage pools, separating the computer backups from the main file system. Each storage pool’s virtual disk is formatted as ReFS. I created a VHDX virtual disk in each storage pool and formatted its file system as NTFS. This allows, for example, Azure Backup, which is not supported under ReFS. NTFS forces a limit of 16TB on one disk, but even the computer backup doesn’t need such a big disk. Since all drives are based on virtual disks, I can take VM checkpoints, something that cannot be done with a pass-through physical disk. Windows Server 2019 handles the pool and the ReFS volume, and the disks are not offline as they used to be in my previous configuration, hence corruption prevention is always on.

To make sure that I got all my files, I ran Karen’s Directory Printer on the new server file system. I then compared the two big file lists and found out which files were missing or modified. There were several of them, and I found them among the files that ReclaiMe restored. Utilizing the MD5 file hash, I could find files that had been changed – there were about 2,000 such files. When I investigated their content, I found that the copies taken from the original folder, after applying the fix from Microsoft, were corrupted, while the files that came from Crashplan and from ReclaiMe were in good shape. I had to restore only two files from my OneDrive backup; these files were added to the server just before the failure occurred and did not yet exist on Crashplan.
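
A rough PowerShell equivalent of that comparison, for anyone who prefers scripting it over Karen’s Directory Printer (the paths are illustrative):

# Hash every file under a root and key it by its relative path.
function Get-TreeHashes($root) {
    Get-ChildItem $root -Recurse -File | ForEach-Object {
        [pscustomobject]@{
            RelativePath = $_.FullName.Substring($root.Length)
            Hash         = (Get-FileHash $_.FullName -Algorithm MD5).Hash
        }
    }
}

# Anything missing or modified shows up as a difference in path or hash.
$original = Get-TreeHashes "E:\ServerFolders"
$restored = Get-TreeHashes "F:\Restored\ServerFolders"
Compare-Object $original $restored -Property RelativePath, Hash |
    Sort-Object RelativePath | Format-Table -AutoSize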

I also installed a disk monitor utility on the host machine:

Conclusions:

  1. Backup, backup, backup – everything, even files that come from the Web. If you need them, back them up; it will save you a lot of time.
  2. Don’t create Storage Pools that are too large. Create two or more small pools – smaller troubles, faster recovery time.
  3. If there is a hardware problem related to disks, even if it is just a warning, fix it as fast as possible.
  4. Run disk-monitoring software and replace disks as soon as there is a warning or error.
  5. Replace old disks. If they are out of warranty, just do it. Some disks can run for a long time; however, each year hard disks become cheaper and larger, so when you replace an old disk, you get a better and bigger one.
  6. Be careful when connecting a disk to a storage pool via USB; prefer a certified JBOD enclosure.
  7. On a Windows client operating system such as Windows 10, there is a good chance that Windows Update will kick in and reboot the system while you are in the middle of the restoration process. Suspend Windows Update if you can.
  8. There are good people in the world who really like to help – at Microsoft, at Code42 – Crashplan, and at ReclaiMe.
  9. Be patient; recovering a large storage pool takes time.
  10. Don’t use a nonsupported setup, i.e. running Windows Server 2016 on a VM hosted on Windows 10, where the storage pool is created in Windows 10 and passed through as a physical disk to the Windows Server virtual machine…

Thanks:

I’d like to thank Yaron Bental, who was so kind as to technically review the article and provide his feedback and fixes.

Image

Amir Shitrit|5 years

A sense of immaturity in our (software) industry

TL; DR

  • The software industry still has a long way to go, and it can borrow a thing or two from more mature industries.
  • Make sure you define the roles and responsibilities necessary for your project to succeed and hire the right people to fulfill them.
  • Clearly define the development process (it’s helpful if you have the right people to advise you about that).

Disclaimer: The opinions in this post are mine and are based on my own experience.

Putting aside the “community message” at the bottom of the poster below, we see this kind of sign at every construction site we pass by, and that’s a good thing, because it gives the impression that the professionals trusted with developing the project know what they’re doing – and they do. At the very least, they know who is responsible for each aspect of designing and developing the project, whether it’s the architect, contractor, project manager, or electrical engineer.

Therefore, it only makes sense that before starting a new software project, we should at least know the roles and responsibilities required to get that project going and the people and/or organizations that will assume those responsibilities. To me, this seems the logical first step before starting any real work – casting. And indeed, in every mature industry this will be the first step, whether it’s the construction of buildings, the filming of a movie, or the design of electronic devices, vehicles, planes, etc. However, in the software industry this isn’t always the case, and if you get the casting wrong, especially at the management level, everything else won’t matter. For example, in one of the companies I consulted for, the CEO decided to recruit an accountant for the position of project manager. That person had no knowledge of software engineering and did not possess any management skills. From then on, every new employee hired by that manager was unsuited for the job, and the rest is history.

Ok, but why is it so?

I believe there are several reasons for this:

  • Our industry is a relatively young one, especially when compared to other veteran fields of engineering.
  • Software is “soft” while other industries are “hard”.
  • There has been a shortage of talent for quite some time now.
  • Companies seek to save money, and, as a result, often aim too low.

A young industry

Software is indeed a young industry, especially considering that computers have only been around for 70 years or so (wow! time sure flies!) and that advancements in software are tightly coupled to advancements in hardware. Moreover, computers keep getting faster and cheaper on a daily basis, and faster and more capable computers create new opportunities to develop more sophisticated software. Regardless of hardware, software also keeps evolving faster and faster, to the point where it’s extremely difficult for software developers to keep up with recent advancements.

The fast pace at which software advances causes confusion when trying to decide on the “best” project-management methodology for each project, be it Scrum, Kanban, or SAFe. While the jury is still out on that matter, many companies struggle to find the most suitable process to steer their development, while others have no process at all. Since not having a clear process makes it hard to define the roles required for running the project, it only makes sense that it will also be hard to know what to look for in new candidates. For example, if you decide to go with Scrum, you’ll probably need a Scrum Master and a Product Owner, but if you have no process at all, you won’t know that these are the roles you need.

Software is soft

A software system (at least any system that has real users) keeps changing during its lifetime while actually being in use – more so than tangible products like a car, a building or a smartphone. Some of the reasons for this are:

  • Customers and users keep coming up with new demands.
  • Bugs are being detected and solved.
  • Quality attributes, such as security, scalability, throughput, and latency, change.
  • New regulations, such as GDPR, apply.

There are many reasons why software systems must evolve over time and accommodate new requirements, but I think the most important one is that they can evolve.

To be fair, it’s not so much about the ease and cost of changes as it is about the speed at which we can push those changes to our users. For instance, with a website, a simple refresh gets you to the newest version, and with a mobile app or a desktop app, simply update it from the app store. In short, the “soft” nature of software poses a challenge that other, more mature, industries need not face.

A shortage of talent

The software industry has been rising mostly steadily ever since it came into the world, and until we’re all replaced by code-writing bots (or not), it will continue to rise disproportionately to the growth in population. This is to say that we are in dire need of as many engineers as we can get our hands on – even if it means we have to compromise on quality.
Hiring junior engineers is not a bad thing. In fact, it’s a necessity: an industry cannot grow without juniors, and a senior developer must start somewhere. However, this does mean we need at least some more experienced developers to guide and mentor the junior ones, and while this seems obvious to most of us, it really isn’t, especially since many companies (mostly startups) can’t afford to hire senior developers. OK, but why is it so bad, you might ask? The best-case scenario is that the project will fail. The worst-case scenario is one of real danger, and we developers, as Uncle Bob says, have important ethical responsibilities to write quality code. What happens when we don’t?

  • Private and personal information leaks to the public.
  • Identity and account theft.
  • Credit card fraud.
  • Confidential information is disclosed.
  • Autonomous vehicles and drones get hijacked.
  • People die.

The solution

The software industry is young and highly dynamic, which takes its toll on its development, but the most basic step towards a more mature industry is the casting: hiring the right people for the job, or at least getting someone to tell you who the right people to hire are.

Bottom line

Whether it’s waterfall or agile, monoliths or microservices, big-bang releases or continuous delivery, the one thing we must get right is a clear definition of the roles and responsibilities required for a software project and the casting of the right people into those roles. By doing so, you will give the project a real chance and avoid throwing money away. One way of doing that is investing more in the hiring process. Another option is to seek help from an expert company.

Published by Amir Shitrit

I’m a software architect and consultant @CodeValue.

Image

Omer Barel|5 years

Securely Provision Azure Infrastructure using Terraform and Azure Key Vault

Terraform is a great orchestrator for infrastructure provisioning. It has tight integration with Azure and you can provision just about anything with it. 

However, quite often security is overlooked in the provisioning process:

  • Credentials used to connect to Azure are not kept securely.
  • Virtual machines are created with weak passwords.
  • Passwords are kept in the terraform configuration file in cleartext.
Terraform and Azure image for blog post

If you ever thought to yourself “There’s gotta be a better way to do this…” then read on to gain insight on security basics when using Terraform and Azure.

We will leverage Azure Key Vault, a managed service to store sensitive data (such as secrets, keys, and certificates) to help secure our infrastructure provisioning process.

If you want to follow along, you will need:

Background

If you’ve been living under a rock for the last couple of years, let me give you a short background into the different components we will use:

Terraform is software that enables you to provision infrastructure using code. It does that by leveraging providers such as Azure, AWS, GCP, and others, and provisions the infrastructure (virtual machines, managed DBs, networks, blob storage, etc.) on top of them. Terraform uses its own language, called HCL (HashiCorp Configuration Language), to define the set of infrastructure to provision.

Azure is the cloud infrastructure offering from Microsoft. It allows you to consume infrastructure and other dev-related services as a service, on a pay-as-you-go model.

One of those services is called Key Vault, which is essentially a secure vault in the cloud. You can safely store sensitive information (such as passwords, certificates, and the like) in a given Key Vault and use it with other resources.


Connect Terraform Securely to Azure

Let’s start with the very basics – connecting Terraform and Azure. This is a one-time process that will then allow us to connect securely to Azure from within Terraform every time we want to provision or modify Azure infrastructure.

Setup Terraform Service Principal Name (SPN) in Azure

Terraform recommends authenticating using a Service Principal when using a shared environment.

(Note: although you can use the Azure CLI as well when you’re running Terraform locally, I found that using a Service Principal in both use cases is a better approach and helps streamline the overall provisioning process.)

Terraform needs the following information to authenticate with Azure:

  • subscription_id
  • client_id
  • client_secret
  • tenant_id

In the link I shared you can read how to retrieve those values. Once you have them, come back here.
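
If you prefer doing it from the CLI, creating the Service Principal and collecting those four values can look roughly like this; the SPN name and role are illustrative, and the appId, password, and tenant fields in the output map to client_id, client_secret, and tenant_id:

# The current subscription id (subscription_id).
az account show --query id -o tsv

# Create a Service Principal scoped to that subscription.
az ad sp create-for-rbac --name "terraform-spn" --role Contributor --scopes "/subscriptions/<subscription-id>"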

We will create secrets for all the above values inside Azure Key Vault, and then use those secrets to authenticate Terraform with Azure.

Setup Azure Key Vault

Azure Key Vault is a managed service from Microsoft that allows you to store and access sensitive data in a secure way. Full documentation can be found here.

We need to create a Key Vault and grant our SPN permissions to the Key Vault. (Note: although you can create the Key Vault itself with Terraform and grant the Terraform Service Principal access to it, there is a bug in the process and access isn’t actually granted.)

Create the Key Vault using the GUI or AZ CLI. You can find details on how to do this using the GUI and using the CLI.

Once created, go to your Key Vault and create an Access Policy granting access to the Terraform SPN. See the image below for an example on how to accomplish that (I gave the Terraform SPN full permissions to the Key Vault. In a production scenario, you might want to limit that)

Store Terraform login information in Azure Key Vault

In the beginning, we created the SPN for Terraform and got the following data:

  • subscription_id
  • client_id
  • client_secret
  • tenant_id

To access it securely we should:

  1. Store the data in Azure Key Vault.
  2. Configure environment variables on our machine to use the secrets from Azure.

The process:

  • createsecrets.sh creates the secrets in Azure. Replace the data next to the --value parameter with the data you got in step I (you need to authenticate to Azure with a user that has permissions to run az keyvault secret set in order to create the secrets). This is a one-time process.
  • mycreds.sh is a local file on your computer that you source (e.g. source ./mycreds.sh) whenever you need to make the connection to Azure. The data will stay in your shell environment as long as the shell is open. Again, to run this script you need to authenticate to Azure with a user that has permissions to run az keyvault secret show. This process is done once per shell session (normally when you start your workday). A sketch of both scripts is shown below.
  • Note that once sourced, the credentials are stored in plain text in your shell (output.sh shows that).
GitHub open code keyvault
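Here is a minimal sketch of what the two scripts might look like (not the exact files from this post): the Key Vault name my-terraform-kv and the secret names are illustrative, and the ARM_* variables are the environment variables the Terraform azurerm provider reads.

createsecrets.sh – run once, authenticated as a user allowed to run az keyvault secret set:

az keyvault secret set --vault-name my-terraform-kv --name subscription-id --value "<your-subscription-id>"
az keyvault secret set --vault-name my-terraform-kv --name client-id --value "<your-client-id>"
az keyvault secret set --vault-name my-terraform-kv --name client-secret --value "<your-client-secret>"
az keyvault secret set --vault-name my-terraform-kv --name tenant-id --value "<your-tenant-id>"

mycreds.sh – source it once per shell session (source ./mycreds.sh):

export ARM_SUBSCRIPTION_ID=$(az keyvault secret show --vault-name my-terraform-kv --name subscription-id --query value -o tsv)
export ARM_CLIENT_ID=$(az keyvault secret show --vault-name my-terraform-kv --name client-id --query value -o tsv)
export ARM_CLIENT_SECRET=$(az keyvault secret show --vault-name my-terraform-kv --name client-secret --query value -o tsv)
export ARM_TENANT_ID=$(az keyvault secret show --vault-name my-terraform-kv --name tenant-id --query value -o tsv)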

Eureka!

Once sourced, Terraform will pick it up and use it to authenticate to Azure when you run Terraform init (which is the first command you run when you want to start configuring infrastructure with Terraform). Read on to see a live example of the process.

This way, no sensitive information is stored in your .tf configuration files! (and you can now save and share them securely with team members using source control, for example)

Optionally, you can source any other Terraform variable using the same technique. You don’t have to do it for our example, but it’s important to be familiar with this functionality.

For example, you might need to work with Azure AD, and for that you need the tenant_id. In such a case, just make sure to prefix the environment variable with TF_VAR_:

GitHub open code mytfvars
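A sketch of the idea (not the exact file from this post): Terraform maps any environment variable named TF_VAR_<name> onto the Terraform variable <name>, so a line like this could be added to mycreds.sh:

export TF_VAR_tenant_id=$(az keyvault secret show --vault-name my-terraform-kv --name tenant-id --query value -o tsv)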

And then you can declare a variable without a value in your variables.tf file and terraform will pick the value from your shell:

GitHub open code variables
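A minimal sketch of the corresponding declaration (the variable name follows the example above); leaving the body empty means Terraform takes the value from the TF_VAR_tenant_id environment variable:

variable "tenant_id" {}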

Connect to Azure and Provision Resources

At this point, you should be able to authenticate from Terraform to Azure using the data that is stored in your shell (We will test it together in a moment).

This gives us the ability to start provisioning resources in Azure using Terraform.

Let’s think of the following simple scenario:

  • You want to provision a virtual machine
  • You want to generate a login password that will be complex
  • You want to store and retrieve that password in a secure way

We can achieve this type of scenario rather easily by utilizing Azure Key Vault.

Create Key Vault Secrets using Terraform

This time, since we’re already connected to Azure, we will create a secret and store it in Key Vault using Terraform:

Save the below file in a folder on your computer and make sure to change the default values of the vault_uri and resource_group_name variables to the Key Vault that you created earlier:

GitHub open code
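Here is a minimal sketch of the idea in Terraform 0.11-style HCL (not the exact file from this post). The variable defaults are illustrative, and note that newer azurerm provider versions use key_vault_id on azurerm_key_vault_secret instead of vault_uri.

variable "vault_uri" {
  default = "https://my-terraform-kv.vault.azure.net/" # illustrative value
}

variable "resource_group_name" {
  default = "my-terraform-rg" # illustrative value, used later when we add the VM
}

provider "azurerm" {}

# Terraform-specific resource (from the random provider), unrelated to Azure:
# it generates a complex string we never have to write down in cleartext.
resource "random_string" "vm_password" {
  length  = 16
  special = true
}

# Store the generated value as a secret in the Key Vault created earlier.
resource "azurerm_key_vault_secret" "vm_password" {
  name      = "vm-admin-password"
  value     = "${random_string.vm_password.result}"
  vault_uri = "${var.vault_uri}"
}

output "generated_password" {
  value = "${random_string.vm_password.result}"
}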

Terraform “random” Resource

If you look closely at the Terraform code, you will see a “random string” resource.

This is a Terraform-specific resource (from the random provider) that creates a random string; it is unrelated to Azure in particular.

You can use that to auto-generate the secret data and achieve 2 goals at once:

  1. Once again, you don’t put your secret data in clear text inside your configuration, allowing you to check it into source control safely and share it among team members
  2. You can create a complex string that can adhere to security requirements 

Open your shell, navigate to the folder where you saved the above file, and run terraform init (this will use the connection to Azure and download any provider plugins needed to run the Terraform plan). The output should look similar to this:

Code segment terraform init


Run terraform plan to see what terraform will provision on Azure:

Code segment terraform plan

If everything looks good, you can go ahead and run terraform apply to create the resources in Azure:

Code segment terraform

The Outputs section will output the generated data to the console so we can store and use it in a safe place (In addition, you can query it later using Terraform or Azure CLI).

Use the created secret as VM login

Let’s take our previous example of creating a secret and add a simple VM config to it. For the sake of readability, note that this isn’t a complete configuration to set up a VM in Azure using Terraform. I just added the specific part that refers to the secret.

A full example of how to provision a VM in Azure using Terraform can be found here.

The important part here is the last few lines, starting with os_profile. Take a close look and see that the value for admin_password is a query that gets the value of the created secret.

code at GitHub image
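Here is a minimal sketch of that fragment (not the exact file from this post, and not a complete VM configuration): a data source looks the secret up from the Key Vault, and os_profile consumes it. Resource names are illustrative; newer azurerm provider versions use key_vault_id on the data source instead of vault_uri.

data "azurerm_key_vault_secret" "vm_password" {
  name      = "vm-admin-password"
  vault_uri = "${var.vault_uri}"
}

resource "azurerm_virtual_machine" "demo" {
  # ... network, image, size and the rest of the VM configuration go here ...

  os_profile {
    computer_name  = "demo-vm"
    admin_username = "azureuser"
    admin_password = "${data.azurerm_key_vault_secret.vm_password.value}"
  }
}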

To wrap it all up…

Let’s recap what we did:

  • Setup Azure Terraform SPN (Service Principal Name) to enable Terraform access to Azure.
  • Created Azure Key Vault to store the SPN login information and granted the Terraform SPN access to the Key Vault.
  • Authenticated to Azure using credentials that are safely stored in Azure.
  • Created other secrets in Azure Key Vault.
  • Used the created secret as the login password to a VM that we provisioned using Terraform.

So there you have it – simple and secure operation of Terraform and Azure, using Azure Key Vault.

What did you think? Do you have a suggestion to further secure the flow? Ping me back @omerbarel on Twitter and share your feedback

Oh, and one last thing –

Make sure to check back soon as I will take us to the next level – securely provision AKS (Azure Kubernetes Service) using Terraform and connect it to Azure Key Vault to further secure our environments

Published by Omer Barel

Bringing Dev & Ops closer together http://about.me/omerbarel

Image

Guy Nesher|5 years

NativeScript Part 2 Set Up Guide


Note: If you don’t have a working installation yet, head to part 1 of the series for a quick setup guide.

Welcome to part 2 of our NativeScript series. By now you should have a working installation of NativeScript with an Android emulator and are ready to dive into some actual code.

Native Scripts part 2

In the following posts in the series, we will build a multiple selection quiz game where users can select from a list of available quizzes (loaded from our server) and see how they are ranked against the current average.

While our app will run on both iOS and Android we will focus on the Android development (as iOS development requires a Macintosh).

The final product will look something like this.

Welcome to the QuizMonkey

Setting Up Our Project

NativeScript CLI provides us with an easy way to start a project –

`tns create <projectName> --template <templateName>`

There are several built-in templates provided by the NativeScript team, but for our example, we will use a custom template (which already includes our images and some base files). For custom templates, we simply provide the url of the Git repository (Github in our case).

So let us start by creating our project:

tns create quizMonkey --template https://github.com/gnesher/nativescript-workshop-template

If all goes well you should see a message similar to this:

Message 1

A new folder by your app’s name will be created at the containing folder, and it will hold the application skeleton.

Start your emulator/simulator.

CD to your newly created folder:

Message 2

And run tns run android

Assuming all went well, the emulator should start and you will see an empty application (during the first run a few additional packages will be installed, so it might take a minute or two depending on your connection speed). The result should look like this:

Result 1

NativeScript supports live reloading, so let’s open our favorite IDE and edit ‘welcome-view.xml’ by changing our label from text=”welcome” to text=”welcome to Quiz Monkey”

Save and look for the change in your emulator.

Note: sometimes hot-reloading does not work when changing xml/css files, in these cases, you can make a small change on a js file (add a space for example) to prompt it.

Before we move forward let us quickly review the folder structure

Within our app folder, the template has pre-populated the following folders:

  • Views – will contain our individual views; currently it contains a single empty view called welcome-view
  • Shared – shared logic for our app. It also includes the mockData folder with some mock questions for our quiz.
  • App_Resources – contains our images in multiple size formats (to fit different device resolutions). There are several tools that help create the different image sizes, such as http://nsimage.brosteins.com/

Creating our first view – the Welcome Page

Our welcome-view currently contains a single xml file. Open it in your editor of choice and change the label to contain the following text – “Welcome To Quiz Monkey!”

<Page xmlns="http://schemas.nativescript.org/tns.xsd">
    <Label text="Welcome to Quiz Monkey!"></Label>
</Page>

Note: Have a look at app-root.xml; this is where we define which view loads on app start. At the moment we load the welcome-view, but during development we may switch to different views to speed things up.

Save welcome-view.xml and see how it looks on your emulator – you will notice that there’s an action bar with your app name – let’s hide it by adding the following property to the page tag

<Page xmlns="http://schemas.nativescript.org/tns.xsd"
      actionBarHidden="true">

So far we only have a static XML file, which isn’t very useful. In order to hook it up to our JavaScript we use the ‘loaded’ event, which triggers the selected function from our JavaScript file (file names must be identical). Let’s start by adding the following property to the XML file:

<Page xmlns="http://schemas.nativescript.org/tns.xsd" loaded="onPageLoaded" actionBarHidden="true">

Now create a new file in the same folder named ‘welcome-view.js’. In the new file, export a function called ‘onPageLoaded’ and have it log something to the console:

exports.onPageLoaded = function() {
    console.log("welcome page loaded");
};

After saving, you’ll see your log in the terminal:

Log in terminal

* If you don’t see the console.log try restarting the tns compiler.

Note: You can start your application in debug mode, which allows you to use Chrome for debugging. Simply run tns debug android and you will be prompted with a link you can copy/paste into the browser. Visual Studio Code users can also debug using their IDE. See here for more details.

Before we can start adding more elements to our page we need to decide how NativeScript will organize the elements by selecting one of the provided layout containers (you can learn about other layouts in the following link).

For now, we will use the StackLayout. By default, it stacks items vertically (but it can also stack them horizontally by specifying its orientation to horizontal).

Add the StackLayout element to the root of the Page, and place in it the label and a new button:

<Button text="Start Here"/>

Your .xml file should now look similar to this:

<Page xmlns="http://schemas.nativescript.org/tns.xsd"
      loaded="onPageLoaded"
      actionBarHidden="true">
    <StackLayout orientation="vertical">
        <Label text="Welcome to Quiz Monkey!"/>
        <Button text="Start Here"/>
    </StackLayout>
</Page>

Now add a tap event handler to the button by adding the following property:

<Button text="Start Here" class="navigationButton" onTap="onButtonTapped"/>

In our JS file, add an export for the ‘onButtonTapped’ function, which will only write to the console for now:

exports.onButtonTapped = function() {
    console.log("start button tapped");
};

Run the application and try tapping the button; you should see our console.log in the command prompt.

Add styles

Styling is an important aspect of any mobile application. To simplify the process of styling your app, NativeScript uses a subset of CSS, which should be familiar to most web developers.

A complete list of supported CSS commands can be found here

Now let’s add some design to our app. We start by adding the following classes to welcome-view.xml:

  1. For the StackLayout add a welcome-stackLayoutContainer class
  2. For the Label add a welcome-text class
  3. And for the Button add a navigationButton class

<Page xmlns="http://schemas.nativescript.org/tns.xsd"
      loaded="onPageLoaded"
      actionBarHidden="true"
      class="welcome-pageContainer imageBackgroundContainer">
    <StackLayout class="welcome-stackLayoutContainer"
                 orientation="vertical">
        <Label text="Welcome to Quiz Monkey!"
               class="welcome-text"/>
        <Button text="Start Here"
                class="navigationButton"
                onTap="onButtonTapped"/>
    </StackLayout>
</Page>

Create a new css file called ‘welcome-view.css’ within the views/welcome-view folder and paste the following code:

.welcome-pageContainer {
    padding: 25;
    background-image: url("res://monkeybackground");
}

.welcome-text {
    font-size: 45;
    vertical-align: center;
    text-align: center;
    white-space: normal;
    margin-bottom: 20;
    color: rgb(49, 2, 2);
}

.welcome-stackLayoutContainer {
    vertical-align: bottom;
}

* You may have noticed that our background image is located in “res://” – this is our resource folder. NativeScript will choose which version (size) of the image to display based on your phone.

* It’s also worth noting that we don’t add the image extension (NativeScript supports either png or jpg and will add the extension automatically).

The CSS we just added is only applied to the welcome-page, but we also need to add some global CSS for the app. To do that paste the following styles inside the ‘app.css’ file which holds style common to the entire app:

.imageBackgroundContainer {
    background-repeat: no-repeat;
    background-position: center;
    background-size: cover;
}

.navigationButton {
    background-color: white;
    border-color: black;
    border-width: 1;
    border-radius: 50%;
}

You should now have a nice-looking non-functional welcome view:

non-functional welcome view

Join us next month for the third part of the series, where we will add the quiz list and question page. Until then, feel free to contact us if you have a question or ran into a problem while following the post.

Published by Guy Nesher

Senior consultant at Codevalue, Israel

Image

Nick Ribal|6 years

TypeScript for skeptics

The ugly duckling: TypeScript’s past, present, and future

In front-end terms, TypeScript has been around for a while. After a decade in front-end, I ignore hype for about a year to see whether THE-NEXT-BEST-THING™ gains momentum and real-world adoption.

TypeScript was no exception. The fact it looked like another ugly attempt to put the “Java” back into “JavaScript” made me skeptical and reluctant to try typescript@1. It smelled too corporate and looked as attractive as office fluorescent lights…

Java logo with “JavaScript” erroneously written below it

But over time, and especially after typescript@2’s release, lots of smart folks began adopting and praising the language. Several trusted friends assured me that TypeScript is just “safer JavaScript”. In the spectrum from modern dynamic and functional languages to classical OOP, TypeScript’s gradual and structural typing, as well as type inference placed it closer to the former, rather than the latter category.

The thing that finally changed my mind was the reassurance that I won’t need to change my chosen coding style (functional programming) and could still benefit from type safety.

In addition, TypeScript steadily evolves at a surprising pace: gaining features, library and tooling support, resulting in ever greater adoption. It’s not an underdog, nor is it reserved for dull, corporate dashboards. Especially nowadays, with typescript@3 additions, it’s time to give it a chance!

Taming the beast: you control strictness

Friendly dog overlooking fence with scary “BEWARE OF DOG” sign

TypeScript’s gradual typing and flexible configuration allow you to be as lenient or strict as necessary. Tuning compiler flags enables the gradual conversion of legacy applications, as well as uncompromising strictness for new projects.

I believe in strict and automatically enforceable coding standards via static analysis tooling and mandatory code reviews. I’ve been teaching and preaching for clean code, encouraging and enforcing a tight feedback loop for teams I lead and clients for years.

Any tool, which can catch mistakes early and shorten the feedback loop deserves my attention. Immediate feedback from tests and static analysis reduces wasted time due to wrong assumptions and errors, making TypeScript a natural fit and the next logical step.

So when I began leading a long term, remote, greenfield project for a major client in the financial sector with uncompromising quality requirements – TypeScript fit like a glove. Handling money and implementing complex business logic make TypeScript’s features very appealing. I could test TypeScript at its full potential, as strict as possible.

“There ain’t no such thing as a free lunch.”

I’ll be frank: TypeScript has plenty of strings attached. You must invest in finding, setting up, configuring and integrating TypeScript compatible tooling and packages. The compiler can be too strict, the linter needs tweaking and you gotta jump through hoops to make tooling work for you.

A lack of alternatives for Babel plugins is an example: for better or worse, they are common and even mandatory for certain packages’ core features. TypeScript’s equivalent, “transforms”, are scarce to non-existent. So can you use a nice package like Emotion with TypeScript?

Grumpy cat says “NO.”

And you’ll face bugs in popular packages, missing types and breaking changes.

Then there’s the language itself: TypeScript is easy to read but is hard to write (unless you’re experienced with strongly typed programming). Assuming you’re not writing Java style but are typing the dynamic, expressive and flexible JavaScript you know and love – keeping the program sound is HARD. It is arduous work, it can be too verbose, and some error messages are vague and confusing.

Writing strict TypeScript requires considerably more work, compared to JavaScript’s naive CTRL+S, as soon as you think you’re done. Is it even worth it?

Yes, definitely!

Magic happens as soon as the compiler is satisfied: BOOM! Everything Just Works™. NO RUNTIME ERRORS. None, nada, practically ever!

Shia Labeouf MAGIC meme

Unless your logic is wrong (which no compiler can catch), JavaScript’s tiresome “save, run, fix, save, run, troubleshoot, fix, save, repeat… ad nauseam” cycle breaks in favor of almost boring predictability. Hard to believe at first, you keep running the program expecting it to fail – but when a strictly typed program compiles, things work!

Obviously, anything you can’t type still has to be handled: API calls, responses, missing permissions, timeouts, JSON.parse(), etc. But you actually focus on those, instead of your program’s basic (in)ability to run.
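To illustrate that boundary (this example is mine, not from the original post): typing the result of JSON.parse() as unknown forces you to handle the untypeable runtime input explicitly, and everything downstream stays fully type-checked.

interface User {
  id: number;
  name: string;
}

function parseUser(raw: string): User {
  const data: unknown = JSON.parse(raw); // runtime input we cannot type statically

  if (
    typeof data === "object" &&
    data !== null &&
    typeof (data as { id?: unknown }).id === "number" &&
    typeof (data as { name?: unknown }).name === "string"
  ) {
    return data as User;
  }
  throw new Error("Unexpected payload shape");
}

// Only this boundary needs runtime checks; the rest of the program relies on the compiler.
const user = parseUser('{"id": 1, "name": "Ada"}');
console.log(user.name);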

“But wait, there’s more!”

Compared to JavaScript, it’s amazing just how well TypeScript prevents bugs and detects edge cases. When the compiler uncovers a code path you’ve missed, which would’ve inevitably been a major bug, the value of the language really sinks in.

Refactoring and autocompletion are awesome: renaming, refactoring and moving entities between files, modules, and classes are built-in. Because TypeScript is developed in tandem with its language service (which is used by any TypeScript compatible editor), you get IDE capabilities in any editor – without a heavyweight IDE.

Renaming a React prop and its interface in Sublime (screen capture)

Evergreen documentation is another benefit: unlike comments and (non autogenerated) docs, type annotations – by their very definition – cannot rot, lie or get out of sync with code. They’re a clear and enforced contract between interdependent program parts.

A strict TypeScript codebase builds confidence in your program’s correctness, allows you to reason more clearly about it, and guarantees up-to-date documentation of its parts and their relations. TypeScript’s inherent traits compose elegantly, allowing and empowering developers to refactor and modify a program as it grows and evolves in new, unpredictable ways, as software usually does.

TL;DR: is TypeScript suitable for a given project?

“One does not simply TL;DR engineering tradeoffs” meme

As with all engineering decisions, you manage priorities and trade-offs.

In my view, TypeScript is less suitable for quick, temporary or exploratory work. Typically, it won’t help with incomplete ideas (which you want to test through code), quick POC’s, monkey patches, hacks or just playing with novel APIs or concepts. It’ll get in your way, distract, interfere and hinder your progress.

It’s noteworthy that several of my colleagues and friends disagree and attested to TypeScript’s helpfulness even in casual tasks!

When you’re investing in a long-term project, large or distributed teams, TypeScript is great! If reliability, maintainability, documentation, and correctness are the baseline requirements, then TypeScript is a solid choice that will benefit you and your team in your goal to deliver a better product to end-users.

Stay (type) safe!

Published by Nick Ribal

OSS, Linux, Web, front-end is what I ♥️ and do for a living as Consultant & Developer. Me and my family are digital nomads traveling the world! 

Image

Ido Braunstein|6 years

Create private Helm Charts repository with ChartMuseumUI

In this article, I will show you how to create your very own private charts repository using ChartMuseumUI.

But first, a little introduction…

Containers are nothing new in the infrastructure world; we use Docker containers to package up code that has been built and thoroughly tested in a continuous integration environment. Such a container will then execute on any server (cloud or on-premise) that has a Docker host, and nothing more.
To scale your application and run containers on multiple servers, you will need a container orchestration tool; in case you didn’t know, Kubernetes has become the de-facto standard for orchestrating and managing containerized apps in production environments.
Once you begin embracing microservices, or if you want to place your infrastructure in containers, you need to find a way not only to scale your application but to install, configure, upgrade, and run it in a matter of minutes.

Helm

If you have ever used a package manager such as apt, yum, or brew, then I bet you already know how important they are and how they provide an easy way to install, upgrade, configure, and remove applications.
Helm is the package manager for Kubernetes.

Helm is the first Kubernetes-based package installer. It manages Kubernetes “charts”, which are “preconfigured packages of Kubernetes resources”.
It allows describing the application structure through convenient helm-charts and managing it with simple commands.
Adoption of Helm might well be the key to mass adoption of micro-services, as using this package manager simplifies their management greatly:

  • Easy application deployment
  • Standardized and reusable
  • Improves developer productivity
  • Reduces deployment complexity
  • Enhances operational readiness
  • Speeds up the adoption of cloud-native apps

Great tool, right? I strongly recommend you look at the Helm cheat sheet for more information and examples for working with Helm.

Chart repository

The main global charts repository is located here and contains two folders:

  • Stable
  • Incubator

The purpose of this repository is to provide a place for maintaining and contributing official Charts, with CI processes in place for managing the release of Charts into the Chart Repository.
Stable Charts meet the criteria in the technical requirements.
Incubator Charts are those that do not meet these criteria. Having the incubator folder allows charts to be shared and improved until they are ready to be moved into the stable folder.

Demo

Prerequisite

In this demo, we will use ChartMuseum for creating and managing our charts repository, and the ChartMuseumUI web app to easily view our charts in the browser.
We will be using docker-compose, so Docker and an installed Helm client are our only prerequisites.

ChartMuseum

ChartMuseum is an open-source for creating and managing your very own charts repository. The real power in this tool is the ability to choose the storage type from a wide variety.

ChartMuseum have support for cloud storage backends, including Google Cloud StorageAmazon S3Microsoft Azure Blob StorageAlibaba Cloud OSS Storageand Openstack Object Storage.

ChartMuseumUI

ChartMuseumUI is a simple web app that provides a GUI for your charts, so you and your team can upload, delete, view, and share the technologies you are using with anyone at any time (more capabilities will be added in the near future).
ChartMuseumUI was written in Go (Golang) with the help of the Beego framework.

Deploy

tl;dr

The following docker-compose file defines ChartMuseum with Amazon S3 as its storage backend and exposes ChartMuseumUI on port 80:

code segment image version 2.0
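A minimal sketch of what such a compose file might look like (not the exact file from this post). The ChartMuseum settings (STORAGE, STORAGE_AMAZON_*) are documented environment variables, but the bucket, region, and the ChartMuseumUI image name, port, and settings are illustrative assumptions – check both projects’ READMEs for your setup.

version: "2.0"
services:
  chartmuseum:
    image: chartmuseum/chartmuseum:latest
    ports:
      - "8080:8080"
    environment:
      DEBUG: "1"
      STORAGE: "amazon"
      STORAGE_AMAZON_BUCKET: "my-charts-bucket"      # illustrative
      STORAGE_AMAZON_PREFIX: ""
      STORAGE_AMAZON_REGION: "eu-west-1"             # illustrative
      AWS_ACCESS_KEY_ID: "<your-access-key>"
      AWS_SECRET_ACCESS_KEY: "<your-secret-key>"
  chartmuseumui:
    image: idobry/chartmuseumui:latest               # image name assumed
    ports:
      - "80:8080"                                    # app port assumed
    environment:
      CHART_MUSEUM_URL: "http://chartmuseum:8080"    # variable name assumed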

Copy this file and run:

code segment image docker
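Most likely something along these lines (assuming the file above was saved as docker-compose.yml in the current directory):

docker-compose up -d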

Easy, right? Now, we can add our private repository to our Helm client:

code segment image 2
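For example (the repository name myrepo is illustrative, and the URL assumes ChartMuseum is exposed on localhost:8080 as in the compose sketch above):

helm repo add myrepo http://localhost:8080
helm repo update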

Let’s upload a chart into our private repository using the terminal:

code segment image
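One way to do it (the chart name is illustrative) is to package the chart and POST it to ChartMuseum’s /api/charts endpoint; the helm-push plugin is another option:

helm package mychart/
curl --data-binary "@mychart-0.1.0.tgz" http://localhost:8080/api/charts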

Or, we can head over to the browser and navigate to localhost to view our charts using ChartMuseumUI:

ChartMuseumUI is at an early stage of development; therefore, code contributions are very welcome. If you are interested in helping make ChartMuseumUI great, then you are more than welcome! Visit the project on GitHub.

Image

Alon Fliess|6 years

Code Archaeology – How to revive a more than 30 year old game

The CodeValue computer museum in the main meeting room in Yokneam.

When we opened the northern branch of CodeValue in Yokneam, Israel, about two years ago, we decided to make a computer museum in our main meeting room.

I took all of my old 8-bit computers, peripherals, programming books, and other old devices that I have collected since I was a little boy and used them to decorate the room. We have a Commodore 128, Sinclair QL, Atari 2600, Amiga 3000, Olivetti PC1, Casio PB 1000, a Windows XP tablet, a Windows 95 laptop, and other machines, as well as interesting artifacts such as punched cards and magnetic-core memory. I also took all of my old computer magazines and books, so that anyone who is waiting for a meeting to start can read about the first personal computers or learn 6502, x86, or Z80 assembler.

Alon fliess computer museum

Commodore 128, Sinclair QL, Atari 2600,…

Among the computer magazines, I found an old folder from 1987. In 1987 I was an AFS student; I took part in a youth exchange program where I became an American teenager for six months. I was a family member of the lovely Counsel family in East Circle Drive, Whitefish Bay, Wisconsin. I also joined the WFB High School and took the computer class. In this class, we learned the 6502 assembly language and programmed the Apple IIe.

Apple machine

Since I was a Commodore fanboy, I already knew 6502 assembly (my first computer at home was the Commodore VIC-20, where the only way to do something serious with a 5KB machine was to use machine language). This was the first time that I had a real computer class and a very good teacher who helped me polish my development skills. I very soon became the best student in the class, always getting A+ extra credits for all my projects. The folder that I found among my books and magazines contains all of my homework: many pages of long assembly code printed on a dot matrix printer.

computer course in the 80s
old computer user

One of the listings contains the last project – the Annoying Snake game.

Graphic project game

So… how can I bring this game to life?

This code is about 1000 lines long. I thought – can I bring this game to life? Can I use an Apple IIe emulator to run code that I wrote more than 30 years ago, when I was 17? I decided to give it a try.
First things first, I took my phone and photographed each and every page of the code. On the one hand, it allowed me to magnify the text to find out whether a character is an ‘E’ or an ‘F’ or some other hard-to-read letter. On the other hand, sometimes I got a line that was cut in half between two consecutive pages and I needed to work out the opcode and data of that command.

The development and runtime environment

I started by searching for an Apple II emulator and found a very good one, the open-source AppleWin. This emulator has some very good features, among them the ability to accelerate the speed of the machine – very important for fast macro assembly compilation (the ASM command). Another great feature is the built-in debugger that helped me find those locations where I didn’t put the correct opcode or data.

Apple emulator for windows
LDA keyboard

The emulator emulates two disk drives and lets you switch between them, which makes it very easy to switch between the macro assembly disk and the data disk.

The macro assembler

Once I had the environment set up, I needed to find the macro assembler program that I used at WFB high school 30 years ago. I googled for Apple II and macro assembly, together with some of the macro assembly directives such as .HS, and found the S-C Macro Assembler. The problem is that this assembler is a bit different from the one used in my original code: it lacks some of the macros and it uses some directives in a different way. It took me some time to understand how to migrate the code to this dialect.

The main changes are: the .HS directive takes the data without spaces or dots, so instead of .HS 80.80.0D you write .HS 80800D. The .AS opcode for strings works the same, however there is no .AT directive. Searching the web, I found that .AT is like .AS, the only difference being that the last character in .AT has its most significant bit set to one to mark the end of the string – like the modern ‘\0’ without wasting another byte. I also found that .AS can do this trick by adding a dash as a prefix for the string: .AS -”E”. The S-C assembler could not take ASCII or binary data (or there is a way that I couldn’t find), so I had to migrate commands such as ORA #%1000,0111 to ORA #$87 and CMP #’ ‘ to CMP #$A0, but this wasn’t so hard since the original code printout included the resulting machine language code next to the macro assembly text.

assembler

I should have spent more time browsing sites such as this one, to find a better match to the original assembly dialect, but I wanted to get started, so I decided to use the S-C assembler that I have found.

Entering the code:

Since the old printed pages were not easy to OCR, and since I had to migrate and translate many commands, I spent several hours typing, translating, and migrating the code. I found out that the S-C assembler cannot have line numbers above 9999, so I had to change the line numbers at the end of the code to overcome this limitation. After several hours I could assemble (compile) the code. It took me some time to understand how to run binary code (the BRUN command) and how to list the files on the disk (the CATALOG command); eventually, it started, but very soon it stopped. I had to go over the source code again and verify that I got everything right – I found at least five places where I had put the wrong opcode or data. I got into endless loops and invalid states, but the emulator’s built-in debugger was very helpful.

Bringing the game back to life

“Houston, we have a problem!” When I started this project I thought that the code listing was all I needed to revive the game. I found, however, that the game uses the BLOAD command to read files from the disk. Actually, there are two places where my original snake game uses the disk: to load and save the high score table, and to load the 8 game rooms. For the high score, I read the code that saves a new high score and understood that it uses 160 bytes from address 0x2000, where 14 bytes are used to hold the name, then a byte for the level and room number, and two bytes for the score. I wrote a very simple BASIC program that creates an empty high score file:

10  FOR I = 0 TO 180 STEP 18

20  FOR X = 0 TO 13

30  POKE 8192 + X + I,255

40  NEXT X

45  POKE 8192 + I + 13,160

50  POKE 8192 + I + 14,255

60  POKE 8192 + I + 15,255

70  POKE 8192 + I + 16,0

80  POKE 8192 + I + 17,0

90  NEXT I

100  PRINT  CHR$ (4);”BSAVE HIGH SCORE,A8192,L180″

The other binary files are the room layouts, a LO-RES binary file format that starts at address 1024 and spans another 1024 bytes. The problem is that the Apple graphics memory has holes. Reading some Apple II BASIC documents, I found out that I can create lo-res graphics using commands such as GR to switch from text mode to graphic mode, and COLOR, HLIN, VLIN, and PLOT to do the drawing. I created a program in BASIC that generates the rooms, for example:

… Room 8:

800  GOSUB 1100

810  GOSUB 1000

820  COLOR= 15

830  FOR I = 5 TO 33 STEP 8

840  HLIN 3,35 AT I

850  HLIN 3,35 AT I + 1

860  NEXT

870  FOR I = 7 TO 34 STEP 8

880  VLIN I,I + 3 AT I

882  VLIN I,I + 3 AT I + 1

884  NEXT

890  PRINT CHR$ (4)”BSAVE ROOM 8,A1024,L1024″

990  END

1000  REM DRAW ROOM BOUNDARIES

1010  COLOR= 15

1020  HLIN 0,39 AT 0

1030  HLIN 0,39 AT 39

1040  VLIN 0,39 AT 0

1050  VLIN 0,39 AT 39

1060  RETURN

1100  REM CLEAR SCREEN

1110  COLOR= 0

1120  FOR I = 0 TO 39

1130  HLIN 0,39 AT I

1140  NEXT

1150  RETURN

With 8 rooms and empty high score table, the game now runs:

To start the game, use: ]BRUN GAME

Brun game
the annoying snake game 1987

I prefer to change the keys and use the modern PC cursor keys. So press Y and follow the instructions.

Do you want to change keys?

The first room is an empty room with no obstacles:

Snake game
Star of David graphics

Room 5 has a more complex obstacle.

The name of the game: “The Annoying Snake” came from its annoying music, so you should set the sound in the emulator to hear it:

Apple Win configuration

I have succeeded in running the game on other Apple II emulators:

Android, the a2ix emulator:

the annoying snake game
an old Apple emulator in JavaScript

Conclusion

I am very happy that I could bring back to life one of my oldest pieces of code.  I have two Amiga games from 1992 on the web:

old computer games

However, I don’t have the ‘C’ language source code for these games.

I think that now I can say that I have more than 30 years of development experience!

You can find the source code and the binary here.

Image

Omer Barel|6 years

Automate GRAFANA Dashboard Import Process

In a recent project I did for one of my customers, we wanted to gain insight into our application.

We are running a Kubernetes cluster, so we decided to deploy Prometheus & Grafana as our monitoring solution. Once deployed, we wanted to automate the process of importing various Grafana dashboards into the application.

While you have the option to manually import a dashboard once Grafana is up and running, we wanted to automate the process so we can keep Grafana as stateless as possible and ease the administration overhead when we upgrade or deploy it across different environments.

Before we dive into the “how-to”, I want to take a minute to describe the environment we’re going to work on, since it’s crucial to our automation process.

Grafana image

Environment Overview

Grafana is an “open platform for beautiful analytics and monitoring”. It will visualize your collected metrics and present them in a graphical way (line charts, histograms, gauges, tables, etc.). As an open-source application, you can deploy it in many ways (you can read about some of the options here).

We are working in a Kubernetes (k8s) environment, so we proceeded to choose a monitoring solution accordingly. We decided to deploy Prometheus Operator and Kube-Prometheus using Helm. Since each of these three can be developed into a blog post (or series of…) on its own, I won’t go into too many details about them.

I will, however, point out that:

  • Helm is a package manager for k8s, allowing you to deploy applications to a k8s cluster (think of yum or apt for Linux)
  • Prometheus Operator allows you to deploy Prometheus on top of k8s, “in a kubernetes-native” way
  • Kube-Prometheus is a deployment of Prometheus that is using the Prometheus Operator to provide a running Prometheus instance together with various exporters and Grafana

Our Architecture will look like this:

Kubernetes cluster

What do you need in order to play along?

If you want to see it all in action, here’s what you need:

  • A running k8s cluster with version 1.9 and up with helm installed
  • Prometheus Operator & Kube-Prometheus deployed in your cluster (If this is your first time doing this, I highly suggest you read the documentation on github)

Run the below commands against your k8s cluster to install Prometheus operator on it:

kubectl create ns monitoring
git clone https://github.com/coreos/prometheus-operator.git
cd prometheus-operator/helm/kube-prometheus
helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/
helm dependency build
cd charts
tar -zxvf grafana-0.0.37.tgz
rm grafana-0.0.37.tgz
cd ../../..
helm install --name prometheus-operator --namespace monitoring helm/prometheus-operator/
helm install --name kube-prometheus --namespace monitoring helm/kube-prometheus/

Problem Statement

So, if we’re using Kube-Prometheus to deploy Grafana, and we want to automatically import a custom dashboard to it, how do we go about it?

  1. Prepare your dashboard for importing (or “template” it)
  2. Modify the Grafana deployment to include the modified dashboard
  3. Upgrade your helm release and watch the magic happen

Prepare your Dashboard for import

Start by creating your dashboard. It can be a fresh one created from scratch or one you took from the Grafana Dashboards page. For the sake of learning, we will use Kubernetes cluster monitoring (via Prometheus) to play with.

Download the dashboard .json file and edit it with your text editor, turning it from this:

Original dashboard .json

To this:

modified dashboard .json

Note how we wrapped the start and end of our original json to make it a template

This process is rather easy if you’re using an existing template. However, if you’re exporting a live dashboard from Grafana, there are a few “gotcha’s” that you should pay attention to:

  • If the original datasource isn’t called Prometheus in Grafana (let’s say, it’s called dev-Prometheus) you need to find all the references for it in the .json and modify them. For example:
json code segment
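For illustration only (this fragment is not from the original dashboard), every field of this form in the exported .json:

"datasource": "dev-Prometheus"

should be changed to match the datasource name defined in our Grafana deployment:

"datasource": "Prometheus"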

Note that you will likely have multiple references to the datasource parameter in your dashboard.json file. You should modify all of them.

Modify the deployment to include the modified dashboard

Now that we have our dashboard ready, we need to add it to the Grafana helm chart inside kube-prometheus for Grafana to pick it up.

Reading through the Grafana helm chart documentation, this is what we’re looking for:

Adding Grafana Dashboards

You can either add new dashboards via serverDashboardConfigmaps in values.yaml. These can then be picked up by Grafana Watcher.

ServerDashboardConfig

We will add a folder called custom-dashboards into the Grafana chart folder (located inside kube-prometheus/charts/grafana) and copy our .json there.

Note the comment at the bottom and name your file accordingly:

The filename (and consequently the key under data) must be in the format `xxx-dashboard.json` or `xxx-datasource.json` for them to be picked up.

We will add a configMap and call it “custom-dashboards”, directing it to load any json file in our “custom-dashboards” directory inside the Grafana chart:

Code segment GitHub Config
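A minimal sketch of what such a template might look like (placed under the Grafana chart’s templates folder; the exact metadata and labels in the real chart may differ). Helm’s .Files.Glob/.AsConfig pair turns every matching file into a key in the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-dashboards
data:
{{ (.Files.Glob "custom-dashboards/*-dashboard.json").AsConfig | indent 2 }}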

Lastly, we will modify the main values.yaml located inside the Grafana chart directory, directing Grafana to load a configMap called custom-dashboards:

Code segment GitHub
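Following the chart documentation quoted above, the values.yaml entry would look roughly like this (a sketch; keep any entries that already exist in the list):

serverDashboardConfigmaps:
  - custom-dashboards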

And we will also direct the kube-prometheus chart to pick up our new, modified Grafana chart. Open up the requirements.yaml file (inside the kube-prometheus folder) and modify it to point to our modified chart:

Code segment Grafana

Let’s recap what we did:

  1. Created a configMap that will load any json files inside a folder called custom-dashboards and named the configMap custom-dashboards
  2. Directed Grafana to load the new configMap named custom-dashboards into its configuration
  3. Directed kube-prometheus to load our modified Grafana chart instead of the default one

Tip!

By doing it this way, we can add future dashboards simply by adding them to the custom-dashboards directory and refreshing the configMap in our deployment.

Upgrade your helm release and watch the magic happen

Lastly, let’s update our kube-prometheus deployment with the new Grafana configMap and see if everything works:

helm upgrade kube-prometheus helm/kube-prometheus/

Now, let’s open Grafana and see our new custom dashboard in action!

Code segment Kube
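To reach Grafana from your workstation you can port-forward the Grafana service; the service name and ports below are assumptions, so check kubectl -n monitoring get svc for the actual values:

kubectl --namespace monitoring port-forward svc/kube-prometheus-grafana 5000:80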

Open up your browser and point it to http://localhost:5000, then choose our imported dashboard Kubernetes cluster monitoring (via Prometheus) under the home menu:


Summary

Prometheus is an amazing monitoring system, and Grafana lets us visualize our data in a beautiful way

When deploying the monitoring solution with the suggested approach (the “operator” way), we get a full-fledged monitoring solution, and we get a way to customize it to our needs in a kubernetes native way.

Meaning, when we need to modify a component’s configuration (Grafana in our case), we do it by modifying kubernetes components (helm charts, configMaps) instead of modifying software configuration files.

This way, we can maintain our own version of the software (our helm chart) and re-use it with our needed modifications in any environment.


So, what do you think about the Grafana dashboard import process? Do you want to know more? Ping me @omerbarel

Oh, and one last thing – stay tuned for my future blog posts as I dive deeper into helm scopes, charts, sub-charts, and global variables.

Ciao,

Omer Barel,

DevOps Consultant, CodeValue


Image

Guy Nesher|6 years

NativeScript 4 – Windows & Android Set Up Guide

This is a part of a post series. Part 2 can be found here

Intro

NativeScript is an open-source framework that allows you to build native, cross-platform, mobile apps using Javascript (and potentially Typescript, Vue.js, and Angular).

While installing NativeScript itself is quite straightforward, managing its dependencies can get a little tricky, especially for developers who are new to the platform, and doubly so when trying to use the Android emulator.

It’s important to note that NativeScript does offer its own installation guide/script – but it’s a little outdated and doesn’t play well with Android Studio (which we use to manage the Android SDK and emulators). This can cause some trouble or force you to install duplicate copies of the SDK, so for the time being (the NativeScript team is working on a new solution) I recommend a manual install.

In the next posts, we will explore the capabilities of NativeScript Core while developing a simple quiz application – so good luck with the installation and stay tuned for the next parts of this tutorial.

Linux and macOS users can use the official NativeScript guides in the provided links.

Native Script blog post image

Setting Up NativeScript and NPM

NPM (Node Package Manager) is our first prerequisite. Installation is very simple – go to https://nodejs.org/en/ and download the recommended version (LTS). Once you finish downloading the file simply run the installer which will guide you through the setup.

Once completed, open the CLI (Command Line Interface) by pressing the Windows key and typing “cmd”. Select the first match and, in the new window, run “npm” just to verify that the installation went according to plan (as long as you don’t see “npm is not recognized…” we are good to go).

Now we can install NativeScript – run the following command which will install NativeScript globally “npm install -g nativescript”

To wrap this step up, run “tns” – tns is the NativeScript command line; running it without further instructions brings up the help, which should look something like this:

NativeScript command line

Setting Up – JDK (Java Development Kit)

NativeScript relies on the JDK for Android development (Android apps are natively developed using Java). If you don’t have it installed yet, download and install the Java Development Kit from http://www.oracle.com/technetwork/java/javase/downloads/jdk10-downloads-4416644.html

The installation is straightforward, just follow the instructions.

Setting Up Our Dev Environment – Android Studio

Android Studio allows us to manage the Android SDK and our AVD (Android Virtual Devices). To start, go to the following link https://developer.android.com/studio/ download a copy and run through the installation process.

After finishing the installation run Android Studio. There are a few extra installation steps on the first run. Once you pass those, create an empty project (Start a new Android Studio Project link) – we just need to open the main interface so the details of the project are less important. The project creation process might take a minute or two as Android Studio will download/install a few extra dependencies.

* Once you create the project you might see a notification on the bottom right regarding pending updates – go ahead and install them as they often contain important bug fixes.

You should now see the following two icons on the top right corner 

The first manages your AVD (Android Virtual Devices) and the other manages the Android SDK.

We will start with the SDK manager (Second button) which opens a pane with 3 tabs – platforms, tools, and update sites

Platform tab

In the Platforms tab ensure that you have the API installed (currently Android API 28) and the latest operating system version (Android 8.0 Oreo or API Level 27) – if they are not installed simply check the box next to them.

Then go to the SDK tools tab and ensure you’ve selected the Android SDK Build-Tools, Android Emulator, Android SDK tools and Intel x86 Emulator Accelerator (most / all of them should already be selected).

For the Android SDK Build-Tools we also need to select a specific version – so go ahead and select Show Package Details at the bottom right; this will expand the list with specific SDK versions – select the latest 27 variation (currently 27.0.3).

Finally, press apply to install everything that was missing.

Now go to the AVD manager (first button from the previous step) and select “Create Virtual Device”. You can decide which phone to emulate (I suggest the Nexus 5, this is purely for display purposes) and then press next to select the SDK version.

Create Virtual Device

There’s a list of recommended SDK versions – we generally use the latest (28 at the time of writing this blog post). Press the download button and wait while it installs (you will need to accept some T&Cs).

Once you’re done, press next to see the Emulator creation form, here you can change things like the name, orientation, and a few other advanced options. For now, leave it as is and press finish.

Congratulations – you’ve created your first Android Emulator. Press the green arrow to launch it and see that it’s working correctly.

* If the Emulator fails to start there’s probably an extra dependency which is still missing. Switch to the main Android Studio screen and in the bottom right corner, you will find the Event Log with details of the error. In most cases, you will also see a suggestion on how to fix it (link to a download).

Setting Up Environment Variables

Finally, we need to add an “environment variable” to allow NativeScript to find the Android SDK. To do that open the SDK manager again and copy the path of your SDK installation (you can see it in the top of the window)

Once you have the path press the windows key and type “system environment” then select the only option that comes up (edit the system environment variables).

Press the Environment Variables button then create a new system variable (lower option). The key is “ANDROID_HOME” and the value is your SDK path (for me the value was “C:\Users\IEUser\AppData\Local\Android\Sdk” but it may very well be different for you)

We also need to verify that Java correctly set up its own variables – if you can’t see a JAVA_HOME variable, you will need to add it yourself. Simply press the New button once more, and use “JAVA_HOME” as the key and the path to your Java as the value (should be c:\program files\java\<jdk version>).
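If you prefer the command line, the same two variables can be set from a command prompt with setx (the paths below are the examples from this post – use the ones from your own machine, and open a new command prompt afterwards for the change to take effect):

setx ANDROID_HOME "C:\Users\IEUser\AppData\Local\Android\Sdk"
setx JAVA_HOME "C:\Program Files\Java\<jdk version>"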

Wrapping things up

If everything went according to plan you should now have NativeScript installed and ready to use. To verify this, go back to your CLI (from step 1) and run “tns doctor”; this will run several tests to verify NativeScript is ready to use. The result should look similar to this:

NativeScript

Published by Guy Nesher

Senior consultant at Codevalue, Israel

Image

Moaid Hathot|6 years

To Microservices or not to Microservices?


Good software architecture is a combination of science and art.

It is really hard to get it just right, and once you’ve implemented your system, it is often hard to perform big architectural changes to it. i.e. it will cost you $$$!

you called my system a monolith meme

There are a lot of factors you need to consider before designing an Architecture for your system, some of which are technical constraints. For example, today with the power of the Cloud, it is way easier to implement distributed systems that scale on-demand, than it was about 10 years ago, and systems that were designed at that time had to take it under consideration.

Even though good Software Architecture principles will always hold true, some techniques and choices that were decided upon due to technical constraints may become obsolete once those technical constraints expire.

Monolith is the new scary term in Software Architecture. It is often used as an insult by advocates of current trends that are strongly in favor of Microservice-based architecture.

What is a Monolith?

A Monolith system can be considered as an architectural style or design, in which all functional aspects and modules of a system are intertwined together as a single and self-contained unit, often as a single process in a single machine.

Such systems were historically common when working with (or even considered a byproduct of) traditional software design processes like Waterfall, which pushes for big teams with less focus on iterations, and more on launching a product with as many features as possible.

Are Monoliths that bad?

While Monoliths have advantages like being:

  • Simple to deploy as a single unit – There aren’t a lot of moving parts, sometimes it is enough to copy the files to a server.
  • Simple to perform End-to-End testing – Testing is done against a single application.
  • Easier to debug – It is usually easier to debug a single and self-contained system over a distributed system.
  • Easier to implement – It is easier to implement new projects as Monoliths than, for example, as a Microservice-based system.

They violate two important principles of Software Architecture: High Cohesion and Low Coupling.

Cohesion is a measure of the degree to which a module performs one and only one function. Coupling is the degree of how closely modules rely on and connected to each other.

We usually aspire for high cohesion and low coupling. Due to its nature, a Monolith tends to be less cohesive and by result tightly coupled.

Tight coupling and low cohesion have several implications. Software with tight coupling and low cohesion tends to be:

  • Hard to modify – a change in one module usually forces changes in other modules.
  • Harder to reuse and test – Certain modules might be harder to reuse or test since dependent modules must be included.
  • Hard to scale – It could be challenging to scale a single module in a Monolith without the need to scale the rest of the dependent modules, even when there isn’t a need to (Scale-up).
  • Limits the dev stack – dependent components often have to use the same technology.
  • The DevOps process is hard.
  • Hard to understand – hard for a single developer or a small team to understand the entirety of the system.
  • Hard to update – can’t update a single part of it. You will have to update the system as a whole.
  • Less reliable – bug in a single module, like a memory leak, might break down the entire system.

In addition, the Architecture you use for your system doesn’t just impact the system’s structure; more often than not it has implications for aspects like deployment speed, team sizes and organization, and project management. Tightly coupled systems tend to lead to bigger teams, slower shipping times, and less agility.

That being said, selecting a Monolithic architecture isn’t always a bad choice. There are situations where a Monolithic architecture might be a good choice for you, for instance when building POCs or projects that are either small or short-lived.

monolith meme long

What is the alternative?

If a Monolithic Architecture is on one side of the spectrum, being Highly coupled and with Low Cohesion, on the other side of the spectrum there are Microservice-based Architectures.

What is Microservice-based architecture?

In essence, a Microservice Architecture is a method of dividing and building software as small, lightweight, distinctive, independent, and message-enabled modules that are independently deployable.

Instead of building a single service or process that is responsible for all of the functionality, with a Microservice-based architecture you divide the functionality into small services that are usually distributed and decoupled from each other.

Are Microservices that good?

Microservice-based architectures have advantages like being:

  • Simple to deploy each service – each module can be deployed independently and gradually without impacting the rest of the system.
  • Simple to scale – each module can be scaled on-demand accordingly without affecting the rest of the system.
  • Simple to reuse and test – each module can be tested or reused independently.
  • Simple to update – it is easy to update modules as independent units.
  • Easy to modify – a modification to one module shouldn’t force changes to other modules.
  • Flexible dev stack – each module can be developed independently with different frameworks, programming languages, and technologies.

When working with Microservice-based architecture it is common to have small teams working independently on different modules. This could be a big advantage, since each team can utilize their own particular set of skills using technologies of their choosing, without forcing other teams to use them as well.

In addition, since current trends are in favor of Microservices, a lot of the frameworks and cloud technologies being developed today have Microservices in mind. This means there are a lot of tools, frameworks and cloud services that you can leverage while building your system.

I’m using Microservices, I’m so cool!

Where is the catch?

Everything comes with a cost. It is usually harder to develop a Microservice-based architecture:

  • It is much harder to deploy at first.
  • Much harder to debug a distributed system.
  • Serialization, Deserialization and round trips of messages impact performance.
  • Much harder to configure.
  • Much harder to monitor.
  • Much harder to orchestrate and manage a sizable amount of distributed services.
  • Adds complexity – more moving parts.

In contrast to Monoliths, configuring and monitoring a large number (possibly tens or hundreds) of distributed services is a challenging task. There are tools and frameworks that are designed for this purpose, like containers for hosting services and Kubernetes or Azure Service Fabric for monitoring, configuring, orchestrating and automating them.
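
For example, orchestrators typically decide whether a service instance is healthy by polling an HTTP endpoint that the service exposes. Below is a minimal, hypothetical TypeScript (Node.js) sketch of such an endpoint; the /healthz route name is a common convention rather than a requirement, and the probe itself (for instance a Kubernetes liveness probe) is configured separately.

```typescript
// Hypothetical sketch of a liveness endpoint an orchestrator could poll.
// The rest of the service's routes are omitted for brevity.
import * as http from 'http';

const server = http.createServer((req, res) => {
  if (req.url === '/healthz') {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('ok'); // a 200 response tells the orchestrator to keep this instance
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(8080);
```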

As a result, when working with Microservices, a sizable portion of the effort is invested into DevOps and into working correctly with orchestration tools. If you are not aware of that, it could cost you.

These tools are usually a blessing and a curse. They do a great job, but since the task is complicated, so are the tools. They are additional technologies and tools that developers must learn and understand before they can productively work on your system.

Summary

Designing an Architecture for a system is hard. There are a lot of factors to consider and it needs both a touch of science and a touch of art.

A Monolithic Architecture isn’t popular these days, and for good reasons: it tends to be tightly coupled and to have low cohesion, with all of the implications that follow. That being said, there are still use cases where you should prefer a Monolithic architecture for your system.

Current trends tend to favor Microservice-based Architectures, and current cloud technologies are best utilized with Microservices. A Microservice-based architecture tends to be loosely coupled and highly cohesive, making it simpler to update, modify, test and scale. In return, it requires a lot of configuration, monitoring, and orchestration, a task that is not simple and requires a specific set of skills. Orchestration tools are a big part of a Microservice-based architecture.

It doesn’t matter which side of the Architecture spectrum mentioned above you are closer to. Choose the Architecture that suits your needs. With Monoliths on one side and Microservices on the other, your best choice might be somewhere in the middle.

Published by Moaid Hathot.

Software Consultant, OzCode Evangelist & Code Jedi

Image

Moaid Hathot |6 years

Native VS Cross-Platform – Everything you need to know

Welcome to the era of multi-platforms.

The term “Computer” doesn’t apply only to desktops anymore. It doesn’t matter what services your software provides, or whether it is a ‘website’ or an ‘app’, consumers will consume it using multiple platforms.

You need to target multiple platforms in order for your software to be relevant.

native-mobile meme

Hard decisions

We established the fact that you want (and need) to target multiple platforms. This introduces a set of difficult decisions.

  • Should you or even can you use a single code base for multiple platforms?
  • Will you have to implement the app from scratch for each platform you target?
  • Should you use the same UI on all of the platforms, or retain the look and feel of each one?
  • What type of developers do you need? And how should you divide them into teams: by technology or by platform?

These are crucial and hard questions that have to be answered before starting a project. Unfortunately, every situation is unique. There isn’t a simple answer to all of them.

There are simply a lot of factors you need to consider before being able to make the right choice.

Let’s explore the options.

Native Mobile Applications

With the Native approach, you have to develop a version of the app for each platform, using platform-specific programming languages and dev tools. For instance, iOS apps are developed in Objective-C or Swift using XCode, Android apps are developed in Java using Android Studio and Windows-Platform apps are developed in C#, VB.Net or C++ using Visual Studio.

With Native apps, you can utilize platform-specific API’s and abilities, while ensuring maximum utilization of OS resources. This is one of the reasons Native Applications are considerably more performant than apps built with the other approaches.

Hybrid Mobile Applications

With the Hybrid approach, you use frameworks such as PhoneGap/Cordova and Ionic with web technologies like HTML, JavaScript, and CSS to target multiple platforms at once. These apps are considered Hybrid since the web code is packaged into a platform-specific native shell that hosts it using a Web View.

Since the apps are built using regular web stack, you can leverage frameworks such as Angular, React and Vue.js while developing your app.

In addition, you are able to update parts of your app on the fly using a “hot code push” technique without the need to push updates to the App Store.
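
To make the hosting model concrete, here is a minimal, hypothetical TypeScript sketch of the web side of a Cordova-style hybrid app. The code runs inside the native shell’s Web View and waits for Cordova’s deviceready event before starting the regular web UI; startApp itself is a placeholder where an Angular/React/Vue bootstrap would go.

```typescript
// Hypothetical entry point of a hybrid app: the web code waits for the native
// shell to signal that the Web View and plugins are ready, then boots the UI.
function startApp(): void {
  // Placeholder for the real bootstrap (Angular/React/Vue/etc.).
  document.body.innerHTML = '<h1>Hello from the Web View</h1>';
}

// 'deviceready' is the event Cordova fires once its native bridge is available.
document.addEventListener('deviceready', startApp, false);
```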

Cross-Platform Mobile Applications

With the Cross-Platform approach, you are able to use a single code base for multiple platforms by using an intermediate programming language and framework, such as C#/XAML with Xamarin or JavaScript with React Native or NativeScript, which is then compiled into native mobile apps.

Apps that are developed using this approach are OS agnostic. They are able to use OS API’s through an abstraction provided for them by the framework. These abstractions are consistent among all of the platforms, which means they can only offer API’s that are common to all of the platforms – the lowest common denominator.

In some cases, you are still able to access platform-specific API’s and customizations, such as animations, but in doing so you will add complexity and share less code between platforms.
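
As a rough illustration, here is a minimal, hypothetical React Native sketch in TypeScript: a single shared component for all platforms, with one platform-specific tweak expressed through the framework’s abstraction (Platform.select) rather than a separate codebase per OS. The font names are placeholders.

```typescript
// Hypothetical shared component; only the title font differs per platform.
import React from 'react';
import { Platform, StyleSheet, Text, View } from 'react-native';

// Platform.select picks a value based on the OS the app is running on.
const titleFont = Platform.select({
  ios: 'Helvetica',
  android: 'Roboto',
  default: 'sans-serif',
});

export default function Greeting() {
  return (
    <View style={styles.container}>
      <Text style={styles.title}>Hello from a shared codebase</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, justifyContent: 'center', alignItems: 'center' },
  title: { fontFamily: titleFont, fontSize: 20 },
});
```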

Progressive Web Apps (PWA)

With the Progressive Web App approach, you develop websites that are optimized for mobile and that offer an experience similar to that of a regular app. These apps run in the browser and do not require installation, which means Progressive Web Apps do not exist in App Stores.

Since they run in the browser, they can utilize only the API’s that are provided by the browser and cannot access OS-level API’s.

Despite the fact they are essentially websites, Progressive Web Apps can still run offline. The functionality that will be provided depends on the app and on the services it can provide to the user.
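
As a rough illustration of how a PWA can keep working offline, here is a minimal, hypothetical TypeScript sketch of registering a service worker and caching a few core assets inside it. The file names and asset list are placeholders, and a real app would version and refresh its cache more carefully.

```typescript
// --- In the page: register the service worker (file name is a placeholder) ---
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// --- In sw.js, the service worker itself ---
const CACHE_NAME = 'app-shell-v1';

self.addEventListener('install', (event: any) => {
  // Pre-cache the core assets so the app shell is available offline.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(['/', '/index.html', '/app.js', '/styles.css'])
    )
  );
});

self.addEventListener('fetch', (event: any) => {
  // Serve from the cache first; fall back to the network when online.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```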

Because they do not exist in App Stores, they will likely be less discoverable by consumers.

Performance

In terms of performance, Native apps are the obvious winner. This is due to the fact that they do not require any intermediate framework to run (other than the platform’s SDK). In addition, they are able to use platform-specific API’s, granting them the ability to manage resources better.

UI/UX

There isn’t an obvious winner – it is a tie.

  • Native Apps – While it is easy for Native Apps to use platform-specific themes, animations, and controls, you will have to implement the same functionality for every platform. It is also challenging to keep a consistent UI for your application across platforms.
  • Hybrid Apps – While it can be fairly easy to design a UI in HTML and JS (for those skilled with these technologies), since these are not native controls, the UI won’t be consistent with the look and feel of each platform – users will probably notice the difference.
  • Cross-Platform Apps – Since platform-specific API’s are sometimes necessary to preserve the look and feel of each platform, as with animations, you might often find yourself deviating from the shared codebase to implement platform-specific customizations.
  • Progressive Web Apps – Since PWAs are basically websites, more often than not it will be obvious that they are not regular native apps.

Organization & Training

Native Apps are at a disadvantage here. In contrast to the rest of the approaches, developers can’t reuse their skills and expertise across platforms, as each platform requires different programming languages, frameworks, libraries, tools and technologies.

You might find yourself selecting an approach solely based on the skills of your developers.

developers-developers meme

Time to Market

Native Apps are at a disadvantage here as well. In contrast to the other approaches, the app must be built from scratch for each platform, without an easy way to share the codebase.

Your efforts are multiplied by the number of platforms targeted: separate codebases, separate teams, and of course, more testing. The same process is repeated each time a new feature is added.

Community

It is hard to declare a winner. Since the majority of apps are Native Apps, and since they existed first, there are already a lot of information sources, articles and documentation about native app development for each platform. That being said, since Hybrid apps and PWAs both use web technologies, there is arguably even more information available for them; the gap lies in information about integrating these technologies with the mobile platforms. The same goes for Cross-Platform apps.

Tooling

Native Apps have an advantage over the rest of the approaches. Each platform provides tooling that was specifically built and designed for it, making the tools more precise and easier to use on that platform.

While there are great tools and extensions for Hybrid and Cross-Platform development, the experience is still lacking compared to tools for Native Apps. For instance, until recently, the Xamarin tools and extensions for Visual Studio were really hard to configure, and it might be a challenge for beginners to get started using those tools.

Summary

Traditionally, there were two main approaches to Mobile Application Development: Native and Hybrid. While Native apps were best suited for performance, Hybrid apps were easier and faster to develop.

Today, there is a broad spectrum of options, including Cross-Platform frameworks and Progressive Web Apps.

Since no single approach is best in all situations, when starting a new project you must choose an approach carefully, according to your needs and your development staff.

Published by Moaid Hathot.

Software Consultant, OzCode Evangelist & Code Jedi