[Slides] DevConf 2018, Bengaluru: DesOps – Prepare Today for Future of Design (2018)

DesignOps is an approach to design inspired by the culture of DevOps. In this session, we will touch upon practical approaches for preparing for the next wave in design, one that complements DevOps in its concepts of cultural shift, collaboration, and automation. We will also see how to bring design full circle in the context of the software development lifecycle.

Topic: DesOps – Prepare Today for the Future of Design!
Date & Time: 2:15PM – 2:45PM on 5 August 2018
Venue: Christ University – Bengaluru, India

Event: DevConf.in 2018 is the second free annual community conference sponsored by Red Hat for developers, system admins, DevOps engineers, testers, documentation writers and other contributors to open-source technologies. It is a platform for local FOSS community participants to come together and engage in knowledge sharing through technical talks, workshops, panel discussions, hackathons and similar activities.

More info at: http://desops.io/2018/07/04/talk-at-devconf18-designops-prepare-today-for-future-of-design/

[Slides] Ditto – Design Life Cycle Management Concept for DesOps (2016-17)

You can read a related article, Translating Value at Different Stages of Design with Minimal Waste, here:
http://desops.io/2018/05/12/translating-value-at-different-stages-of-design-with-minimal-waste/
Also see the associated video here:
http://desops.io/2018/05/12/video-ditto-design-life-cycle-management-concept-for-desops-2016-17/

[Video] Ditto – Design Life Cycle Management Concept for DesOps (2016-17)

You can read a related article, Translating Value at Different Stages of Design with Minimal Waste, here: http://desops.io/2018/05/12/translating-value-at-different-stages-of-design-with-minimal-waste/

Translating Value at Different Stages of Design with Minimal Waste

In 1967 Roland Barthes published Elements of Semiology, which stands as a temporal marker of post-structuralism. In it he gave rise to the idea of the "metalanguage", which is in fact a "beyond language" or "second-order language" used to describe, explain or interpret a "first-order language". Each order of language implicitly relies on a metalanguage by which it is explained, and it is obvious that the translations between the first order and the second order are never lossless: the true meaning of the first-order language is lost, or deteriorates, when it is represented in a second-order language. The design process, involving multiple intersections of information communication and sharing where value (e.g. a vision or an idea) is translated, loses the original meaning as that value is translated through different tools into various forms of expression. The different roles involved (e.g. product stakeholder or product manager, information architect, interaction designer, visual designer, prototyper, developer etc.) also contribute to the infinite regression of the "meaning" of the value.

That is why working with developers on the realization of a design has always been a big pain point of non-stop iterations, and why the term Full-stack Designer, which shares the same spirit as the term Full-stack Developer, has been gaining favour.

(Download PDF from http://desops.io/2018/05/10/infographic-the-full-stack-roles-in-desops/ )

Typically the phrase "full-stack" (as in full-stack developer) refers to someone who is comfortable with, and could single-handedly tackle, every layer of software development. DevOps engineers and full-stack developers share the same philosophical traits: they strive for greater agility and flexibility, and they hint at a trend towards greater generalization in the skill set of technical professionals. The full-stack designer, likewise, grows into a cross-disciplinary professional who can work across interaction design, visual design, and UI development or prototyping. This helps to reduce the gaps between the outputs of these different disciplines, and the effort that goes into translating a concept as it flows from one stage to another when many roles, spread across more than one person, are involved. (And remember that one of the key principles of DesOps is to remove waste by applying practices like Lean models.)

So one of the major focuses of DesOps in this aspect of work culture is to reduce the gaps between the roles, and thereby the waste. The translation from one role and discipline to another makes the meaning a second-hand interpretation, and such a process is never lossless: something or the other gets lost in translation as we progress towards the realization of the value, or the product.

By implementing Lean methodologies, and by striving towards minimally sized teams whose members have diversified expertise, we help to minimize the intersections or touch points among disciplines and roles where translation happens. In short, the less the value needs to be translated, the less information gets lost, and the less waste there is in terms of value, effort, cost and time.

Going "full-stack" is one approach to avoiding the waste that happens in the process Barthes termed indefinite regression, or aporia. This is mostly from the roles perspective. The other approach is to redesign the process or work practices to reduce the number of intersections where translation happens. In the context of DesOps this is significant, as it involves interactions at various levels between the two primary entities, i.e. human (people or user) and machine, namely: human-to-human, human-to-machine, machine-to-human and machine-to-machine.

So one of the key factors in implementing DesOps in an organization is to look for process re-design that "minimizes the touch points of interaction", following the principle that advocates minimizing the gaps between roles.

Sometime back, I was evaluating some of the design systems and tools available, to study the pain points of transitioning a design concept from one tool to another and moving from one role to another. I got in touch with many of my friends who are designers by profession in different enterprises, at different levels of comfort with existing design and wireframing toolsets, from the Adobe suite to Sketch-based ecosystems. Based on those discussions, the interesting thing I found was that many of the tools used at different organizations are selected based on pricing and on what the team is comfortable working with, rather than with a focus on having a seamless workflow. In many cases cost played the bigger role, and due to licence cost some had switched from the most popular design tools to cheaper replacements. However, the biggest struggle was never solved: the iterations in the design process remained challenging despite the change in tools. Every design change that was iterated overshot its effort, time and cost. The transitions of value from one set of tools to another suffered: e.g. a quick wireframe created in PowerPoint during stakeholder workshops, once translated into Photoshop, ended up being a rework of the layout because of constraints that popped up while working with specific widgets at a specific resolution, which never sat well with the wireframe concept. In one organization, the team replaced PowerPoint with Sketch and tried to use that to complete the process, which turned into a nightmare as the sessions ended with people struggling with the tool.

In search of a tool that can minimize these translations of value, I was toying with the idea of Ditto: a simplified tool that would look familiar to the different roles involved in the process, and at the same time align with a process built around the same source files at every stage, able to fit any design system through an easy configuration mechanism.

(Figure: The common pain points in a typical design process)

 

(Figure: A typical design process and the various touch points where translation happens)

Ditto is a conceptual vision of a design tool that uses a configurable UI pattern library and understands the relationships among the components of the design system it is configured with. It creates and maintains its own objects, which can be rendered on a super-simple, easy-to-follow and familiar-looking interface with drag-and-drop style interactions, enabling users in different roles to focus on their goal rather than on figuring out how to use the tool. Conceptually, Ditto is capable of taking any type of input and exporting different types of output on demand, e.g. wireframes, visual comps, and ready-to-deploy UI coded in HTML/CSS/JS.
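
To make the single-source idea concrete, here is a minimal, hypothetical sketch (all names are invented for illustration) of one component tree feeding several renderers; the real concept would of course cover full design-system configuration, visual comps and interactions:

```python
# Hypothetical sketch of Ditto's single-source idea: one component tree,
# many renderings (wireframe outline, production HTML, ...).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Component:
    """One node in the single source of truth for a screen."""
    type: str                                      # maps to a pattern-library entry
    props: Dict[str, str] = field(default_factory=dict)
    children: List["Component"] = field(default_factory=list)

# The configurable pattern library: each design system plugs in its own
# mapping from abstract component types to concrete markup.
PATTERN_LIBRARY = {
    "page":   "<div class='page'>{children}</div>",
    "input":  "<input placeholder='{label}'/>",
    "button": "<button class='btn'>{label}</button>",
}

def render_html(node: Component) -> str:
    """Export the source tree as ready-to-deploy markup."""
    children = "".join(render_html(c) for c in node.children)
    return PATTERN_LIBRARY[node.type].format(children=children, **node.props)

def render_wireframe(node: Component, depth: int = 0) -> str:
    """Export the very same source tree as a low-fidelity outline."""
    label = node.props.get("label", "")
    lines = ["  " * depth + f"[{node.type}] {label}".rstrip()]
    lines += [render_wireframe(c, depth + 1) for c in node.children]
    return "\n".join(lines)

# One source, two deliverables: no manual translation between stages.
screen = Component("page", children=[
    Component("input", {"label": "Email"}),
    Component("button", {"label": "Sign up"}),
])
print(render_wireframe(screen))
print(render_html(screen))
```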

(Figure: The conceptual solution, Ditto, is based on a "Single-Source" process)

(Figure: The conceptual solution, Ditto, is based on a "Seamless Workflow" process)

 

Watch a video demo that elaborates the concept in the context of these use cases:

The benefit of such a tool would be to reduce the critical dependency on any particular role in the team to carry the project forward. For example, assume a startup has only a visionary guy and a developer. The visionary can go into the tool and drag and drop a few shapes and objects in a traditional PowerPoint/Keynote kind of interface he is familiar with, and export that as a barebones interactive code/prototype that can be tested. He can continuously test and get feedback, based on which he can tweak the same source and iterate. His developer friend can use the same source to export the code, in a click, for use in production. And interestingly, if the developer makes any change, it is updated back into the same source. Later, if a visual designer's help is taken to customize the look and feel, the tool would allow updating the same source that is running in production, with the necessary changes to the code managed by Ditto in the background. And although the visual designer modified the source (which affects the internal code), he did so in a Photoshop-like environment familiar to him.

If we look at this "Single-Source Based Design Eco-System", any role can enter and come out with production-ready code, be it the product owner or CEO of the startup, an interaction designer, a visual designer or a UI developer.
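
Continuing the hypothetical sketch above, this round-trip behaviour falls out naturally: every role mutates the same tree, so every export stays in sync without manual rework.

```python
# Continuing the sketch above: a visual designer's tweak (made through a
# familiar, Photoshop-like UI) is just a mutation of the shared source...
button = screen.children[1]
button.props["label"] = "Create account"

# ...and every deliverable regenerated from that one source reflects it,
# with no manual re-translation between wireframe, comp and code.
assert "Create account" in render_html(screen)
assert "Create account" in render_wireframe(screen)
```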

The benefits of such a tool are many:

  • Seamless workflow for designer-developer collaboration
  • Savings on licence cost, as it is no longer necessary to use multiple design tools
  • A single source makes the design maintainable.
  • The omni-change process ensures that any change made at any design stage automatically takes effect in the deliverables of the other stages (i.e. if the UI HTML code is tweaked, it is also reflected in the wireframe and visual design stages without any extra manual work!)
  • Zero learning curve
  • Single user, single tool: any of the roles can log in and generate implementation-ready UI output.
  • Significantly reduces cosmetic / UI compliance issues
  • Significantly reduces the time to implementation of design concepts
  • A boon for Agile projects, where changes are common.

Some of the components of the conceptual design tool Ditto, as you might have noticed, are already available in some design tools. For example, in Sketch you can create a custom pattern-library theme and use drag and drop to add its elements while you are designing. But that is more about customizing themes based on existing ones and exporting shape-specific CSS at the end, which is at a very low level of maturity from a design-system standpoint. Similarly, the export-design-to-HTML feature of Adobe's Creative Cloud Extract in some sense does not take into account how design systems of high maturity could fit into it, so that the output can have functional interactions as part of the business flow of the UI.

But certainly, Bootstrap Studio is much closer to the concept of Ditto. It could be improved in the area of configuration, and it needs to move away from the traditional application-type UI and interaction layer so that it can align with one of the core attributes, i.e. making each user role feel familiar with the interface, or rather making it easy to follow. Widget-based WordPress page-builder tools like Elementor are closer in this regard. While on the topic of ecosystems, tools and technologies: I will write later about the toolchains we can combine for different levels of DesOps implementation, looking at the maturity levels of the design systems in play in the organization.

Well, the thoughts presented above are a practical example of conceptualizing value communication improved through process changes, and through technology applied to the touch points where translation happens, thereby reducing waste and avoiding loss of the meaning of value throughout the design process. Next we will explore the human-to-human aspect of communication, which is an important part of DesOps culture. Stay tuned.

(c) 2018, Samir Dash. All rights reserved.

(This article was originally published on May 12, 2018 at https://www.linkedin.com/pulse/desops-next-wave-design-part-11-translating-value-different-dash/ )

Infographic: The Left-Right Brain Analogy for Incident Tracking in DesOps

 

Download the PDF version here: https://www.slideshare.net/MobileWish/infographic-the-leftright-brain-analogy-for-incident-reporting-in-desops

©2018,”Infographic: The Left-Right Brain Analogy for Incident Reporting in DesOps” by Samir Dash.
Creative Commons Attribution-Share Alike 4.0 International License

Infographic: The Ten Commandments of DesOps

 

Download the PDF version here: https://www.slideshare.net/MobileWish/the-ten-commandments-of-desops

©2018,”The Ten Commandments of DesOps” by Samir Dash.
Creative Commons Attribution-Share Alike 4.0 International License

The Evolution of Technology in the Context of Software Development & Design Process: Take-away from the First PatternFly Conference

[This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.]

Last Sunday I returned home to India after attending a series of collaborative sessions in Raleigh, NC, with many designers and developers from across Red Hat and the open-source community, at the UX Summit and the PatternFly Conference. The whole experience was inspiring, informative and at the same time thought-provoking, with many takeaways. The most interesting one for me was that, cumulatively, all the inspiring talks at the conference were implicitly hinting at one clue: how our attempts to solve existing technical problems also impact the existing work process, and thereby demand a rethink of the process blocks we use.

Being part of the Red Hat UX Design team, it is always an exploration in giving value to the core principle of "being open" in design, similar to how "openness" is seen as an integral value of development.

“We approach user experience design in the same way we develop our products. It’s all about being open.”

– Jim Whitehurst, CEO & President, Red Hat

So, what's so special about design being open? Is it something different from how we design in a traditional enterprise organization? It's natural to have such questions when we face the open paradigm of design challenges; I had similar questions. One thing to note here is that designing for open source has its own challenges, requiring a re-look at the existing design processes and frameworks, especially those of the software-development scenarios that have been evolving over the last two decades. Focused on bringing a user-centric view to existing variants of the Software Development Life Cycle (SDLC), these were mostly about advocating human factors within the perspective through which we have been used to seeing the SDLC.

But as major enterprises, day by day, go open source to collaborate more and more with the community on developing software, the open-source movement itself matures through gradual evolution, through the different seasons of technology growth and the changing market needs that are shaping SDLCs. For example, with the rise of startups in the last few years, the adoption of MVP-driven Lean development models instead of traditional SDLCs has increased, indicating the evolving dynamics of process adoption.

In the traditional enterprise, the design process is more or less influenced by the standard UX approaches that have evolved over the last couple of decades, taking into account an SDLC defined around the end goals of the organization in the context of user-experience-related benchmarks. Many take different components, tools and methodologies from Design Thinking and similar HCI process models, align them to the SDLC, be it Agile, waterfall or hybrid, and try to achieve user-centered software development (UCSD) from an engineering point of view.

There are sects of organizations who take the user-needs-related benchmarks into account and align the existing SDLC components to them, be it through a User-Centered Design (UCD) approach blended with parts of iterative, Agile or Lean SDLC models, or simply by following a ready-made Lean UX model. While everyone is attempting to solve the UX equation and come up with the best possible framework for advocating user needs, one question remains unsolved: which UX approach works best in a diversified, collaborative open-source culture?

In the very first PatternFly conference, held on the 8th of this month, Michael Tiemann, VP of Open Source Affairs at Red Hat, referred in his keynote to the continuous evolution of the software development domain in the context of design. The views he expressed resonated with the belief that a sustainable design solution can evolve from an organic design process, in which reusability plays an important role.

(Michael Tiemann, Vice President of Open Source Affairs at Red Hat, delivering the keynote at the very first PatternFly Conference at Red Hat Annex, Raleigh on 8th June 2017)

The PatternFly community project, supported by Red Hat, is an attempt in that direction, promoting "design commonality and improved user experience". Dana's blog post, Are "Web Components" in the future for PatternFly?, is an interesting read in this context from a technical angle, addressing how to build a UI framework when there are so many choices and so many strong feelings about the different choices. Every organization is still trying to solve the challenges of UI development for a better experience, such as how to structure and organize CSS code for reusability amid the ever-growing complexity of UI framework choices like Bootstrap, Angular and React. It's also important not to forget that with each adopted UI framework, how we implement experiences in our products also evolves, making an impact on the process blocks that run the design, development and testing phases of the SDLC.

This leads us to the hypothesis that with the adoption of newer technology, we bring change catalysts into our development process, which can influence how we conceptualize and deliver the best experience for any product we are building. Along with this, as SDLCs are reshaped by needs like faster time to market and more sustainable solutions for business needs, so are the associated stages of design, development and testing. These are also maturing through the focus on Continuous Integration (CI) and Continuous Delivery (CD), bringing a sea change to the way software was traditionally planned, designed, developed and tested. When we add the "open" aspect to this model, we get the need to improve the process dynamics so that end-to-end software delivery remains sustainable in the coming times. Red Hat's recent open initiative announcing OpenShift.io is an attempt to address all these aspects; at the least it is a "start" at openly rethinking a solution we thought we already had.

OpenShift.io, as an end-to-end development platform, tries to address multiple areas or process blocks, like planning, analysis and creation, and to answer one question: what is the best way to deliver a software product in the most effective, seamless and consistent way, in a diversified work culture, with value and the best experience possible? It will be interesting to see how this platform evolves in the coming times with the support and collaborative effort of the open-source community, addressing the process-related change catalysts that are making their way into work practices due to the ever-growing complexity of technology change.

In forthcoming posts, I will explore this context with respect to existing UX models and methodologies and try to reflect on "UX in the Open". Stay tuned.

(c) 2017,  Samir Dash. All rights reserved

Lean Methodologies & Hypothesis-Driven Design (HDD)

The second lifeline pointed out under the dimension of "Culture" is the "cultural shift towards Lean philosophies" (refer to part 5 of the current series here: https://www.linkedin.com/pulse/desops-next-wave-design-part-5-definition-3-dimensions-samir-dash/ ). In this part of the series, we will take a deep dive into it.

Typically, lean methodologies are based on two basic goals:

  1. Helping the organization understand customer value
  2. Using the organization's key processes to continuously increase that value, working towards delivering perfect value to the customer through a perfect value-creation process with zero waste

In the context of design and user experience (UX), Jeff Gothelf's seminal book Lean UX provides lean methodologies that "break the stalemate between the speed of Agile and the need for design in the product development lifecycle" [Jeff Gothelf and Josh Seiden, Lean UX]. If we observe closely, the basic foundations prescribed for Lean UX are common to the core Lean philosophies, namely:

  1. Remove waste from the design process: moving away from heavy documentation to a process that creates artefacts which can be used directly in the design and development lifecycle itself.
  2. Cross-functional collaboration: all the stakeholders from every part of the product lifecycle, including designers, developers, quality experts, analysts, marketers and end users, collaborate in a transparent and productive way.
  3. Experimentation-based iterative execution: fundamentally this alludes to the UCD core principles, assuming that the designer (or the team) learns from the execution of prototypes in an iterative cycle that starts early in the product lifecycle, and that this learning is used as input to improve the product along the way. Notice that this is the very basis of the continuous and integrated feedback loop we talked about.

The use of Agile and Design Thinking practices can be seen as alluding to the philosophies above. However, regarding the third philosophy: ensuring that the designer has the right approach to conducting the experimentation, so that the feedback can be used productively for informed decision-making across the design process, requires a non-ambiguous methodology that helps the designer make an assumption and validate it through experimentation. This is the basis of one of the core foundations of Lean UX and related lean methodologies, popularly known as Hypothesis-Driven Design (HDD). Any design we produce is based on certain assumptions, and HDD provides the non-ambiguous approach to handling them, following the third philosophy, i.e. experimentation-based iterative execution.

"Declaring assumptions is the first step" of the Lean UX process [Jeff Gothelf and Josh Seiden, Lean UX]. This statement reflects how assumptions, or hypotheses, are at the core of the Lean UX principles. Basically, the four steps of the Lean UX approach are the same blocks that appear in HDD, and can be mapped as follows:

  1. Make an assumption / form a hypothesis
  2. Build a prototype / Minimum Viable Product (MVP)
  3. Execute / run the prototype
  4. Get feedback / observation

Step 4 is the feedback loop that connects back to step 1, making a cycle, so the feedback from the current iteration is used as the input for forming the assumption of the next iteration.
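
A compact way to picture this loop is as a function whose observation from one cycle feeds the assumption of the next (a hedged sketch; `build`, `run` and `observe` stand in for real prototyping and research work):

```python
# Hedged sketch of the four-step Lean UX / HDD loop described above;
# build/run/observe are placeholders for real prototyping and research.

def refine(assumption, evidence):
    """Step 1 of the next cycle: course-correct using the evidence."""
    return {**assumption, "informed_by": evidence}

def hdd_loop(assumption, build, run, observe, cycles=3):
    for _ in range(cycles):
        mvp = build(assumption)        # 2. build a prototype / MVP
        outcome = run(mvp)             # 3. execute / run the prototype
        evidence = observe(outcome)    # 4. get feedback / observation
        assumption = refine(assumption, evidence)  # close the loop to step 1
    return assumption

# Toy usage: each cycle's observation becomes input for the next assumption.
final = hdd_loop(
    {"belief": "a shorter signup form lifts completion rate"},
    build=lambda a: f"prototype testing: {a['belief']}",
    run=lambda mvp: {"completion_delta": +0.04},   # stand-in for a real test run
    observe=lambda outcome: outcome,               # could be qualitative notes too
)
print(final)
```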

Interestingly, this means the HDD process always depends on outcomes rather than the outputs of a traditional process. The outcomes are hints from the segment that can give direction to the design along the way and help in incrementally validating the vision by acting as "evidence"; more precisely, they are the factors that enable "course correction" for the design. And as the outcomes of the assumption act as pieces of evidence, they support data-driven decisions.

You may ask: does that mean design is reduced to quantitative factors acting as evidence in support of some designs? Here is the interesting aspect of the approach: the feedback, or the evidence, can be either quantitative or qualitative in nature. And to avoid the risk of being limited to measurement-driven design, the qualitative perspective is given equal footing in the HDD approach. This makes a fit case for DesOps, which can converge technology with this philosophy to form the required service-design solutions powered by machine learning, artificial intelligence and automation. The actual implementation using technology is what we will explore in coming articles of this series, mostly while discussing the third dimension, i.e. technology.

The very first step in HDD, i.e. "making an assumption / forming a hypothesis", involves articulating the assumption or the hypothesis. This is the most important aspect of HDD, as the other steps depend on it. A typical hypothesis statement can be framed in the following, simplest form:

I think doing [this] will result in [that].

There are multiple variations on this that professionals use in their areas for HDD-based research and design/development activities. Some add factors such as the segment, the persona, or the end user of the product, something like the following:

For [user], I think doing [this] will result in [that].

The form above keeps the segment in context, which helps keep the focus on a specific persona and can thereby lead to a user-centred design process. This is essentially what many label a design hypothesis.

Some use the additional factor of forming a negation of the hypothesis, adding an attribute for whether the hypothesis is believed to be true or false: formulating a null hypothesis and then defining an alternative hypothesis (essentially a negation of the null hypothesis one craves to validate which, due to its nature, structure or the metrics associated with it, may not be practical to validate directly). In such cases, the state of belief is implicitly presented as another factor taking part in the validation process:

Because I think [this] is true, I think doing [this] will result in [that].

I believe that any hypothesis statement can be cultivated from the bare-bones model of cause and effect, rendered as a formalized statement in an "if ... then" or "when ... then" format. At the least, this gives a more practical approach to building a real-world HDD-based system that is technically feasible, which is the essence of any DesOps system.
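
One way to make these templates non-ambiguous, and hence automatable within a DesOps pipeline, is to treat a hypothesis as structured data rather than free text; a hypothetical sketch, with all field names invented for illustration:

```python
# Hypothetical sketch: the hypothesis templates above as structured data,
# so a DesOps pipeline could store, render and later validate them uniformly.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    action: str                     # the [this] we plan to do
    expected_outcome: str           # the [that] we expect to observe
    persona: Optional[str] = None   # optional user/segment context
    premise: Optional[str] = None   # optional "because [this] is true" belief

    def statement(self) -> str:
        parts = []
        if self.premise:
            parts.append(f"Because I think {self.premise},")
        if self.persona:
            parts.append(f"for {self.persona},")
        parts.append(f"I think doing {self.action} "
                     f"will result in {self.expected_outcome}.")
        return " ".join(parts)

h = Hypothesis(
    action="shortening the signup form to three fields",
    expected_outcome="a higher completion rate",
    persona="first-time visitors",
    premise="form length is the main drop-off cause",
)
print(h.statement())
```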

The next important aspect is how to ensure that we consider the qualitative side of the evidence, as discussed above. Irrespective of the fact that there is a crowd of applications out there claiming to support the business through a data-informed approach, they are primarily variations between extremes: A/B-testing tools on one side, and data-driven analytics and model-based prediction systems on the other. But if we look with care at the core of HDD, we find that its essence lies in making sense of the data and treating it as evidence, injecting it into a UCD-based process to fuel design or business decisions. Though coming up with a real-world HDD system would require tools and technologies similar to those of data-driven systems, they would be supporting components of an HDD-based system. In this regard, a real, full-fledged HDD-based system is literally non-existent as of today, while I am writing this article. For DesOps this makes a lot of difference, as most of the HDD frameworks out there are not 100% autonomous. Most implementations of HDD are based on work practices and methodologies, applied depending on need and feasibility. These are good enough when we use them in the context of the tools we gain from UCD, Lean models and Design Thinking principles. But, interestingly, the philosophies of DesOps take it to the next level, making it autonomous and integrated with the overarching feedback loop we have referred to many times.

To be continued... Stay tuned.

 

(c) 2018, Samir Dash. All rights reserved.

Beta Testing in the Ever Changing World of Automation

(Apart from some details about my prototype for beta testing, this post also explores the interesting definitions from ISO regarding quality that bring "reviews in context", which fundamentally lauds usability philosophies. Originally first published at the Red Hat blog in January 2018.)

Beta testing is fundamentally all about the testing of a product performed by real users in a real environment. There are many tags we use for testing of similar characteristics, such as User Acceptance Testing (UAT), Customer Acceptance Testing (CAT), Customer Validation, and Field Testing (more popular in Europe). Whichever tag we use, the basic components are more or less the same. To discover and fix potential issues, it involves the user and front-end user interface (UI) testing, as well as user experience (UX) related testing. It always happens in that iteration of the software development lifecycle (SDLC) where the idea has been transformed into design and has passed through the development phases, while unit and integration testing have already been completed.

The beta stage in the product lifecycle management (PLM) is the ideal opportunity to hear from the target market and to plan for the road ahead.

As we zoom into this phase of testing, it covers a broad spectrum. At one end is front-end or UI-related testing, involving UI functionality (cosmetics, UI-level interaction, and visual look and feel). At the other end is user experience (UX), including user testing involving A/B (split) testing, hypotheses, user behavior tracking and analysis, heat maps, user flows, segment-preference studies, exploratory testing and feedback loops.

Beta testing relies on the popular belief that goodness will prevail, and this belief defines the goals of the typical tools that carry out such tests: shortening beta cycles, reducing the time investment, increasing tester participation, improving the feedback loop and visibility, etc.

The Importance of Beta Testing

If we dive deeper into the factors behind the existence of tools from different angles, we find two major reasons that advocate the importance of beta testing:

1. Left-right brain analogy, which points to the overlap between humans and technology.

The typical belief is that the left-hand side of the brain mostly processes logical thinking, while the right-hand side is more about emotional thinking. Based on this popular analogy, when we map the different aspects involved in the different stages of the SDLC for a digital product across a straight line from left to right (refer to the diagram below), we notice that the logical and the more human-centered aspects are divided by an imaginary line at the center, and that the emotional index of the components progresses gradually from left to right.

When we map these to the beta testing phase, we notice that the right-hand components are predominant. As users of products (as humans), we are more emotionally connected to those aspects of the product, and these are what a beta test validates or verifies. This makes beta testing one of the most important testing phases in any SDLC.

Another interesting point to note: when we look from the traditional software approach of defining "criticality", the areas tested during user acceptance testing (UAT/beta) mostly fall into the Class 3 and 4 types of criticality. But because they touch the core human aspects, they become more important. This YouTube video (Boy Receives Enchroma Glasses From Father) helps to illustrate the importance of technology touching human emotions. It was posted by a popular brand of glasses to demonstrate how they correct colorblindness in real time for the end user.

Interestingly, this is an aspect of "accessibility", which is typically covered during a beta test. Considering this video, the aspect of accessibility naturally raises the question of what we can do for this father and son as testers, developers, or designers. And when we look at the stats, we find that the number of people accessibility impacts is huge: one in every five people is challenged by some kind of disability. And unfortunately, some reports indicate that a majority of websites do not conform to the World Wide Web Consortium's (W3C) accessibility guidelines (in 2011, almost 98% of websites failed them).

This demonstrates the human angle, and it advocates why beta testing is important: to validate and verify these aspects so that target user needs are completely met.
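
As an aside, some of these accessibility gaps are mechanically detectable, which is exactly where beta-stage automation can help. A minimal sketch of one such check (flagging images that lack a text alternative, as WCAG requires), using only Python's standard library:

```python
# Minimal sketch of one automatable accessibility check: flag <img> tags
# that lack alt text (WCAG requires text alternatives for images).
# A real checker covers far more; this only illustrates the idea.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and not attributes.get("alt"):
            self.missing.append(attributes.get("src", "<unknown>"))

checker = MissingAltChecker()
checker.feed("<img src='hero.png'><img src='logo.png' alt='Company logo'>")
print(checker.missing)   # ['hero.png'] -- fails the text-alternative guideline
```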

2. A second reason for advocating the importance of beta testing comes from evaluating it from the international standards perspective (ISO/IEC 9126-4), which helps to define the difference between usability and quality.

The International Standards Organization (ISO) has been focused on the standards around quality versus usability over time. In 1998 ISO identified efficiency, effectiveness and satisfaction as major attributes of usability. In 1999 a quality model was proposed, involving an approach to measure quality in terms of software quality and external factors. In 2001 the ISO/IEC 9126-4 standard suggested that the difference between usability and the quality in use is a matter of context of use. ISO/IEC 9126-4 also distinguished external quality versus internal quality and defined related metrics. Metrics for external quality can be obtained only by executing the software product in the system environment for which the product is intended.

This shows that without usability/human-computer interaction (HCI) in the right context, the quality process is incomplete. The context referred to here is fundamental to a beta test, where you have real users in a real environment, thereby making the case for the beta test stronger.

Beta Testing Challenges

Now that we know why beta testing is so very critical, let’s explore the challenges that are involved with a beta stage.

Any time standards are involved, including ISO/IEC 9126, most of these models are static: none of them accurately describe the relationship between phases in the product development cycle and the appropriate usability measures at specific project milestones. The standards also provide relatively few guidelines on how to interpret scores from specific usability metrics. And specific to usability as a quality factor, it is worth noting that usability is the aspect of quality where the metrics have to be interpreted.

When we look at today's popular beta-testing tools, the top tools leave the interpretation to the customer's or the end user's discretion. This highlights our number one challenge in beta testing: how to filter pure perception out from the actual, valid issues and concerns brought forward.

As most of the issues are related to user testing (split testing and front-end testing), there is no optimized single-window solution smart enough to handle this effectively. Real users in a real environment are ill-equipped to comprehend all the aspects of beta testing and react properly. It is all a matter of perspective, and not all of it can be validated with real data from benchmarks or standards.

The 2015-16 edition of the World Quality Report indicated that expectations of beta testing are changing dramatically: customers are looking for more product insight through a reliable way to test quality and functionality, along with the regular usability and user testing, in a real customer-facing environment.

It's not only beta testing; user demand generally is impacted by the rising complexities and challenges of accelerated change in technology, development, and delivery mechanisms. The 2017-18 World Quality Report states that the test environment and test data continue to be an Achilles' heel for QA and testing, and that the challenges of testing in agile development are increasing as well. There is now a demand for automation, mobility, and ubiquity, along with smartness, in software quality testing. Many believe that analytics-based automation solutions are the first step in transforming to smarter QA and smarter test automation. While this is true for QA and testing overall, it is also true for beta testing, which deals with the functional aspect of the product.

Let's see where we stand today. If we explore the popular beta testing solutions, we find a big vacuum in the area where users' needs for the more functional aspects would be benchmarked alongside the usability and user-testing aspects. Also, as the diagram below indicates, there is ample space to play with smart-testing scenarios using cognitive computing, automation, and machine learning.

(Note: Above figure shows my subjective analysis of the competitive scenario.)

Basically, we lack "smartness" and proper "automation" in beta testing solutions.

Apart from all this, there are more challenges we notice when we evaluate user needs from the corresponding persona's viewpoint. For example, even assuming the functional aspect is to be validated, the end user or the customer may be unable to recognize issues in it. The product owner, customer, or end user is part of a user segment that may not be aware of the nuts and bolts of the technology involved in developing the product they are testing before signing it off. It's like the classic example of a customer who is about to buy a second-hand car and inspects the vehicle before making the payment: in most cases, he pays the money without being able to recognize "what's inside the bonnet!". This is the ultimate use case that advocates "empowering the customer".

So, how do we empower the end user or the customer? The tools should support the user so that he can have peace of mind while validating the product. Unfortunately, the many small tools that try to solve some of these little issues and empower the user are scattered (for example, a Google Chrome extension that analyzes CSS and creates a report, or an on-screen ruler the user can use to check alignment). The ground reality is that there is no single-window, extension/widget-based solution available. Not all widgets are connected, and of those which are available, not all are comprehensible to the customer/end user, as almost all of them are developer- or tester-centric. They are not meant for a customer without special skills!

When we look at the automation solutions in testing that are part of much of Continuous Integration (CI) and Continuous Delivery (CD), they are engaged and effective mostly in the "pre-beta" stage of the SDLC, and they require specific skills to run. With the focus on DevOps, in many cases CI/CD solutions are being developed and integrated with new-age solutions to cope with the rising complexity of technology stacks, languages, frameworks, etc. But most of them are for skilled development or test specialists to use and execute, and that does not translate well to beta testing, where the end user, the customer, the "real user in a real environment", is involved.

Assuming we could enable all these automation features in beta, another limitation of the existing solutions shows up: employing automation brings its own challenge of "information explosion", where the end user has to deal with the high volume of data automation produces. With so much information, the user will struggle to get a consolidated and meaningful insight into the specific product context. So what do we need to solve these challenges? Here is my view: we need smart, connected, single-window beta testing solutions with automation that is comprehensible to end users in a real environment, without the help of the geeks.

For the last few years, I have been exploring these aspects of the ideal beta testing solution, working on a model and a proof of concept called "Beta Studio". The ideal beta testing solution should do all of the following: utilize data from all stages of the SDLC and PLM, along with standards, specs and user-testing data, to provide more meaningful insights to the customer; test real applications in real environments with real users; be customer- as well as end-user-centric; test the soft aspects of the application, such as usability, accessibility and cosmetics; and be smart enough to compare and analyze these soft aspects of the application against functional testing data.

It should also use machine learning and cognitive computing to make meaningful recommendations, not just dump information about bugs and potential issues. Here is an indicative vision of Beta Studio:

This vision of the ideal beta testing solution touches upon most of the aspects we just discussed. It also touches the interaction points of the different personas, e.g. customer, end user, developer, tester, product owner, project manager, support engineer, etc. It should cover the entire product lifecycle and utilize automation along with machine-learning components such as Computer Vision (CV) and Natural Language Processing (NLP), gathering information to be processed by the cognitive component so it can generate the desired insights about the product, and recommendations. In this process, the system will use data from standards and specs, along with the design benchmark generated from inputs at the design phase of the SDLC, so that meaningful insights can be generated.
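
As a small taste of what the CV piece could automate, a pixel-level comparison of a runtime screenshot against the design benchmark image can be sketched with the Pillow imaging library (a deliberately naive check; the envisioned cognitive unit would reason far beyond raw pixel diffs):

```python
# A deliberately naive sketch of the CV piece using the Pillow library:
# flag where a runtime screenshot deviates from the design benchmark image.
# (Assumes both image files exist and share the same dimensions.)
from PIL import Image, ImageChops

benchmark = Image.open("design_benchmark.png").convert("RGB")
screenshot = Image.open("runtime_screenshot.png").convert("RGB")

diff = ImageChops.difference(benchmark, screenshot)
bbox = diff.getbbox()  # bounding box of all differing pixels, or None

if bbox is None:
    print("Screenshot matches the design benchmark pixel for pixel.")
else:
    print(f"Cosmetic deviation detected in region {bbox}; flagging for review.")
```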

In order to see this vision translated into reality, what do we need? The following diagram hints at the steps we need to take; a minimal sketch of the first step's benchmark comparison follows the list.

  1. First, create the design benchmark from the information available at the design stage, so that product features can be compared against metrics based on this benchmark.
  2. Then automate and perform manual tracking of the product during runtime, in real time, and categorize and collate this data.
  3. This involves creating features to support the user feedback cycle and user-testing aspects (exploratory testing, split-testing capabilities).
  4. Collect all the standards and specifications on different aspects, e.g. the Web Content Accessibility Guidelines (WCAG), Section 508, the Web Accessibility Initiative's ARIA specs, design principles, W3C compliance, JS standards, CSS standards and grids, code-optimization metrics, error codes and specs, device-specific guidelines (e.g. the Apple Human Interface Guidelines), etc.
  5. Build the critical components, such as the Computer Vision and Natural Language Processing units, which process all the data collected in these stages and generate the desired metrics and inferences.
  6. Build the unit that generates the model to map the data and compare it against the metrics.
  7. The final step is to build the cognitive unit that can compare the data, apply the correct models and metrics to filter it, and generate insights that can be shared as actionable output with the end user/customer.
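
As promised above, here is a hedged sketch of the benchmark-comparison idea behind steps 1 and 2 (all metric names are invented for illustration; a real benchmark would be generated from the design files themselves):

```python
# Illustrative sketch: reduce the design stage's output to measurable
# metrics, then report runtime deviations in plain language instead of
# dumping raw data on the user. All names here are invented.

DESIGN_BENCHMARK = {
    "primary_button.background":   "#0066cc",
    "primary_button.font_size_px": 14,
    "page.grid_gutter_px":         16,
}

def compare_against_benchmark(observed: dict) -> list:
    """Return human-readable deviations from the design benchmark."""
    issues = []
    for metric, expected in DESIGN_BENCHMARK.items():
        actual = observed.get(metric)
        if actual != expected:
            issues.append(f"{metric}: expected {expected!r}, found {actual!r}")
    return issues

# Values as captured from the running product (step 2 of the list above).
runtime_observation = {
    "primary_button.background":   "#0065cb",   # off-spec colour
    "primary_button.font_size_px": 14,
    "page.grid_gutter_px":         12,          # off-spec gutter
}
for issue in compare_against_benchmark(runtime_observation):
    print(issue)
```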

While experimenting for BetaStudio, I explored different aspects and built some bare-bones POCs. For example, Specstra is a component that can help create a design benchmark from design files.

With Specstra I was trying to address the issues related to the cosmetic aspect, the visual look and feel. Typically, of the issues that come up, more than 30% are non-functional and mostly cosmetic; there is no reliable solution that helps benchmark these kinds of issues against specific standards. One third of the issues found during the beta/UAT stages of testing are cosmetic and alignment issues, including the category 3 and 4 types. And these happen mostly because the two personas involved (developer and designer) have their own boundaries, defined by a mythical fine line.

When a developer is in focus, roughly 45% of developers are not aware of all the design principles employed, or of the UX heuristics to be employed. Similarly, half of designers are not aware of more than half of the evolving technology solutions around design.

And in 70% of projects, we do not get detailed design specs to benchmark against. Detailing out a design spec comes with a cost and requires skills, and in more than two-thirds of development cases there is an absence of detailed design with specs. Many designs are not standardized, and most do not have clear and detailed specs. Design is carried out in different tools, so it is not always easy to have a centralized place where all the design info is available for benchmarking.

Specstra comes in handy to solve this. It is an automation POC, more like a cloud-based visual design style guide generator working from third-party design source files. This covers the case where the user would like to continue using existing design tools like Photoshop, Sketch or Illustrator, PDF, etc.

You can view the video of the demo here:

View the demo of the BetaStudio POC here:

I understand that reaching the goal of an ideal beta testing solution will require effort and will likely evolve over time. Rest assured, the journey has started for all of us to connect and explore how to make it a reality.

Originally published at the RedHat Blog at https://developers.redhat.com/blog/2018/01/05/beta-testing-automation/

An Experiment for Automating Design Specs

This post talks about one of my weekend experiments, back in 2013, exploring the possibilities of automation for creating visual design specification documents. (Originally published in March 2017 at the Design at IBM blog and other sites.)

'Automation' is the buzzword now in every nook and corner of the tech industry. When we try to resolve the productivity issues in the design domain that now powers design-driven Software Development Life Cycles (SDLC), we always end up with a little innovation, and that's what the story of 'Specstra' is all about.

Automation is one of the trending phenomena of the current technological world. The manufacturing industry has been pioneering it for decades, and other verticals are following. In the software development, testing and deployment fields, automation has been popularized by DevOps and Continuous Integration (CI), which help speed up the software development lifecycle (SDLC). But the SDLC includes design-process components along with the technology-driven ones.

In today's world, design is getting more and more recognition across the entrepreneurial world, and many industry efforts, like 'IBM Design Thinking' and similar frameworks, are trying to create a synergy between the 'Agile' approach to the SDLC and "Design Thinking". It is an interesting crossroads in time, where the next big thing in automation is to bring it to the creative process. In the context of the software industry, I always see "design" as an intersection between creativity and technology, where each shapes the other with help from user needs, and the blending of these results in successful products. This is also the reason automating the design process is a lot more challenging than building automation solutions for a purely technology-driven process.

A few years back, at the R&D center of a leading mobile brand in Bangalore, I was part of a creative team where almost 70% of the crowd were visual designers. Many of the creative crowd complained about the part of the design process that involved creating the style guides of the apps they were working on. Every app project used to be developed for different flagship phone models with different resolutions as well as screen densities. And being developed in native languages for Android view-ports, the designers used to develop each style guide for each project separately, for each model of phone. Each style guide had to be detailed to the pixel level, which the designer had to calculate and define taking into account the view-port's pixel density (PD). Many designers had to maintain different versions of the mock-up and create specs for each version, which was more like a "draftsman's job", with fewer creative moments for expression and innovation than the previous phase, where the designer follows the wireframe and comes up with pixel-perfect mock-ups of the app screens.
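
The per-density arithmetic described here is exactly the kind of mechanical work that invites automation. For example, Android's standard density buckets scale a base mdpi measurement by fixed, published factors (a minimal sketch):

```python
# The mechanical per-density arithmetic a tool like Specstra can automate:
# Android's standard density buckets scale a base (mdpi) value by fixed factors.
DENSITY_SCALE = {"mdpi": 1.0, "hdpi": 1.5, "xhdpi": 2.0,
                 "xxhdpi": 3.0, "xxxhdpi": 4.0}

def spec_for_all_densities(base_mdpi_px: float) -> dict:
    """Expand one base measurement into a per-density spec table."""
    return {bucket: round(base_mdpi_px * factor)
            for bucket, factor in DENSITY_SCALE.items()}

# A 48px-wide control at mdpi becomes 72px at hdpi, 96px at xhdpi, and so on.
print(spec_for_all_densities(48))
# {'mdpi': 48, 'hdpi': 72, 'xhdpi': 96, 'xxhdpi': 144, 'xxxhdpi': 192}
```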

Almost all the designers tried to get their hands on the creative part of the job, engaging with the stakeholders, and preferred to avoid working on the style guide, though they would love to review one. The 'not-so-seniors' worked on drafting the guide and churned out the specs document, yet complained that it was less creative, even though it is one of the most critical parts of the design process. On calculating how much effort goes into the creative phase of creating a visual mock-up versus the drafting work, it turned out that, roughly on average, one view of a single screen takes around 4-8 hours to mock up, and creating a very detailed spec might need another 4-6 hours. But if the design targets multiple view-ports of an operating system, with significant pixel-density changes along with varied resolutions, this drafting time gets multiplied: covering 4 generations of phone models running different generations of Android might need 16-24 hours. So the designer actually takes roughly one week of work, in this case, to take one view from the wireframing stage to finished design with specs ready for the developer. On average an app can have 10 views, so the whole app would need approximately a month of work to be designed and made ready for 4 different models. Even though this is a very high-level, bare-bones calculation, it indicates a few things: for the designer, one quarter of the work remains creative, whereas the remaining three quarters of the time goes to a pure draftsman's job, leading to an unsatisfying work experience.

Similarly for the organization: it is paying a higher fee for a lower type of work, as the typically higher salary of the designer is spent on that three-quarters share of less creative work. Moreover, even though that three quarters of the job is lower-profile work that could have been automated, it consumes most of the delivery time. If we look at the timeline of the delivery of the design deliverables, we see that only a quarter of the delivery time is actually spent creatively. So if there were scope to automate the low-profile manual work, where the designer does not need to use his right brain, the deliverables could be delivered in just one week instead of a month! Also note that time is money for industry, so the organization is actually spending 300% more than it should, and that at a higher price point.

(Fig — 'Specstra' addresses many of the typical automation pain points when it comes to generating a visual design style guide)

Again, apart from this, other factors contributed to the problem above. Being in a world of rapidly changing requirements, many industries follow an 'Agile' or iterative way of working, which means that at short notice things can change, even in the look and feel and UI aspects, and that would mean a change to the style guide if the views of standard controls are affected. This has a cascading effect that flows through the style-guide work: any change in requirements means wasted effort, plus new effort to keep the specs aligned.

Specstra is a pet prototype that I started working on three years back (around 2013) to explore a possible solution for automating the design process.

(Fig — The printable visual style guide generated within minutes by 'Specstra' comes in handy for developers to make a pixel-perfect UI)

Based on these findings, in 2013 I started working on this pet experiment, especially on the creation of visual style guides from different design file formats. My focus was to automate the blocks aligned with the less creative activity, so that those blocks could be removed from the creative process flow. As of today, Specstra has become a one-of-its-kind proof-of-concept software where the user can upload Adobe Photoshop (.PSD) or Adobe Illustrator (.AI) files, or PDFs exported from any design tool (CorelDRAW, Paint, InDesign etc.), and within minutes it can generate a style guide which would otherwise have taken the user days to complete, and that with the chance of human error.

(Fig — It takes a minute for 'Specstra' to generate a print-ready, 100-page style guide for any submitted design)

Basically, it proved that the typical designer pain points, the tedious manual process and the high effort, time and cost of coming out with style guides, can be removed with automation. Bigger than that, this experiment proved that automation is possible in the design creative process!

[ Originally published on March 4, 2017, at Design at IBM ]