Simple, Universal User Experiences Using Visual Language.

This white paper is an overview of our current work and provides a snapshot of research and development, including some general descriptions of what constitutes our intellectual property. It does not include complete and/or detailed technical descriptions or discussions of our business model, financial projections or investor opportunities. It is a living document that is constantly subject to modification. For further information, please contact us at


The Costs of Poor User Experience

The recent explosion of information technologies has brought great benefits, but it has also increased the complexity of our lives. This complexity is primarily a function of the interface between people and technology. For example, one study found that the average mobile phone user touches the phone over 2,000 times every day; most of these interactions are made necessary only by the design of our prevailing user interfaces. Another study found that the average American reads around one hundred text messages every day. A recent study by Jeff Sauro found that tasks done using software had a median completion rate of only 78%.

Every task in real life has a natural, minimal set of steps that are required for its successful completion. When a task is done using technology, there is a corresponding minimal set of interactions that are needed. If a user has to perform extra interactions - say, navigation and search when trying to call a friend - these interactions exact a complexity cost: cognitive, emotional and opportunity costs for the individual, and economic and social costs for the collective.

In recent years, there has been some growth in the field of research and development known as user experience (sometimes shortened to UX). However, despite the increasing profile of UX as a discipline, most enterprises drastically under-invest in UX and it remains a point of failure for many large IT projects.

It is difficult to quantify the cost of poor UX, but here are some rough statistics and examples - the IEEE estimated that $150 billion of IT development time is wasted due to poor UX; companies with good Customer Experience (CX) see five times the revenue growth of those with poor CX; the NHS in the UK wasted $20 billion on a project that did not meet patient needs; poor Customer Experience costs the financial industry $10 billion every year; and poor wayfinding costs a Melbourne hospital $12 million every year.

Neither simple nor inclusive

In our own work over the last eight years in various industries, we have observed first-hand the many costs of poor UX in a number of contexts. Some examples -

The Problem

From a user's point of view, there are three problems with UX today. It is -

The Solution

There is a logical next evolution of UX that solves these problems - a user experience that uses a visual language. This is a structured grammar of symbols (ideograms and pictograms) coupled with a context-aware system that can drive the UX in time. It systematically makes UX better by making it -

A customer journey in Ping

NOTE: using the criteria in ISO/IEC 25022:2016, we can also assert that Ping has a positive impact on effectiveness, efficiency and satisfaction. TODO: mapping of Ping benefits to ISO/IEC 25022:2016

Our Vision

Making the use of technology simple, inclusive and seamless for customers, employees and citizens.

For eight years we have been using our unique visual methodology to simplify UX for enterprise and government, with >50M transactions by >200K users. We are passionate about the positive impact of simpler, more inclusive user experiences and we are excited to now be converting our successful methodology into a scalable platform.

Examples of our work for safety management

Ping is designed to be 'retro-fitted' onto existing experiences in all kinds of contexts, such as commuter journeys, retail customer experiences, employee inductions, civic interactions with government, etc. Our objective is to deploy Ping into common real-world user scenarios and drive at least 10 million user experiences per day by 2022.


Ping is a multi-disciplinary venture that synthesizes research and data from a number of disparate fields - visual design, linguistics/semiotics, psychology/cognitive science, information theory, user interface (UI) design, computer science, natural language processing etc.

Below we present a summary of some of the main trends, results and data that we use in this project.

Ping is a multi-disciplinary project

Why Visual Language?

In parallel with the growth of messaging, the use of visual symbols such as emojis has been increasing. Over 5 billion emojis are sent every day on Facebook Messenger alone. Instagram recently reported that over 40% of its posts contain at least one emoji. 36% of millennials say that GIFs and emojis “convey their thoughts and emotions better than words” - when referring to emotions alone, that proportion rises to over two-thirds.

There is good reason for the growing popularity of visual messaging - our brains are wired to process visuals more efficiently than text. We can identify images in as little as 13 milliseconds. The use of pictures leads to improved retention of information. Symbols are identified more precisely at a single glance and under suboptimal conditions such as distraction.

Visual messages also enable communication across linguistic and cultural boundaries and are accessible to low-literacy users. Symbolic signs are commonplace in globalized spaces such as airports, hotels and international conferences and events - e.g. they are widely used at the Olympic Games. Many governments and public organizations use symbols to communicate important health and safety messages to diverse populations.

Currently, visual messaging is used primarily to provide the non-verbal emotional cues that are usually absent from text messages (but so important to face-to-face conversation). As such, it augments text rather than replacing it, and performs only one of the six functions of language.

The different vocabularies of visual language - emoticons, signs, icons etc. - are being developed in an ad-hoc manner with no underlying framework. We are the first to take a systematic, semantic and structured approach to visual language.

History of Pictograms

Early human writing used pictograms and ideograms to represent objects, activities and ideas. Egyptian hieroglyphs and early Chinese characters were sophisticated symbolic languages developed for a variety of functions. Whilst the Chinese language developed into a logographic language, elsewhere the advent of phonetic alphabets relegated pictorial language to specialized roles, such as use in religious rituals.

An interesting exception is the Dongba script, which is still used by the small community of Naxi people in southwest China. It was originally used for religious purposes and has about 1,400 symbols, which are mainly pictographic. It lacks a complete or effective grammar and consists primarily of nouns. Although it has been used for writing documents and contracts, the main surviving use of the script is in signage - although there also exists a software package for generating its symbols.

Over the last century, there have been a number of attempts made to develop symbolic languages for communication, closely linked to the academic field of semiotics. Early and canonical work was done by Neurath and Bliss. Blissymbolics is an abstract, stylized language with a focus on use by the language impaired. PictNet is a pictographic language for use by children. Picto Online is a functional visual language developed for people with cognitive difficulties.

Emojis and GIFs

I often think there should exist a special typographical sign for a smile — some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question. - Vladimir Nabokov

The explosive growth in the use of emojis has surprised many in the technology industry (not us). Emojis and GIFs are developing into a natural global language for the expression of tone and emotion. Although driven initially by usage amongst younger generations of users, they are now used across all age demographics. Some studies indicate that users of emojis report higher satisfaction with their quality of conversations.

The popularity of emojis is important to us as it confirms our intuitions about the utility of visual language in everyday life. The relative popularity of certain emojis is also a valuable source of information as it encodes a large number of user decisions made across a global user base.

Other Visual Language Platforms

There are two recent efforts at symbolic communication systems using mobile technology that deserve particular attention -

Visual Narrative

Previous attempts at building symbolic communication systems have simply strung symbols together sequentially. There are very few references to building either a visual narrative structure or a syntactic/grammatical structure for combining symbols. However, a simple visual grammar is already used in the composition of many signs in everyday use, e.g. traffic signs, where representational pictograms are combined with abstract modifiers such as arrows in predictable ways. User interface design also has its own simple implicit grammar, e.g. the use of the notification bubble overlaid on app icons.

On the other hand, there is a rich body of research on the narrative structure of comic books. We are particularly interested to learn from the structure of graphics stories that use no text. We will also draw upon the vast literature and body of work relating to narrative structure in film as well as cinematographic techniques, in particular representation of human figures, different types of shots etc.


The theory of natural semantic metalanguage proposes that there exists a minimal set of semantic primes that can be syntactically combined (into semantic molecules) to create universal communications between people. The theory was founded by Anna Wierzbicka of Australian National University and much of the recent work in this field has been done by Cliff Goddard of Griffith University. The work is concerned with verbal/written language and motivated in great part by a desire to translate meaning across linguistic boundaries, such as in the work in Minimal English.

Although there is debate on the connection between semantics and vision, we see an opportunity to draw upon NSM research and the concept of semantic primes/molecules/templates to inform our Semantic Tag Minimal Language (STML).

We also draw some inspiration from the pragmatics school of linguistics with its focus on language as communication within context. Specifically we note that ambiguity is generally resolved in communication through a shared knowledge of context. We also (loosely) draw on the idea of frame semantics when developing our UX Director schema.


There is an obvious and extensive overlap with the field of semiotics, and in general in the interests of reducing friction of adoption it makes sense for us to adopt/align ourselves with existing semiotic conventions when and where possible. However, we also consider whether existing semiotics adhere to our own symbol design principles and reject existing conventions where there is a substantial departure.

Game Design

The use of Finite State Machines (FSMs) as a game engine is particularly relevant to the development of our UX Director. Games can be thought of as FSMs (or related classes of automata), although they are not necessarily programmed as such. If we consider the Ping user as analogous to the game player/character, we can draw upon the rich and successful field of game design - e.g. implementations in Unity; see also Twine.

State Machines and Automata

We observe that although the state pattern is common in computer programming, there is a scarcity of literature directly relating FSMs, HSMs, pushdown automata etc. to the generation of UI/UX. Generally, where work has been done linking state machines to UX, researchers have used a very low-level mapping between the two (e.g. representing every UI element in the state). This leads to intractably complex sets of states and transitions.

Rather than using states to represent unique technical UI configurations, we use higher-level states that represent the state of the physical environment, psychological states for the user and internal system state. This is a different way of using FSM for UX which leaves the exact details of UI generation etc. to the particular client application.
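The composite, higher-level state described above can be sketched as a small data structure. This is a minimal illustration only - the sub-state names (`environment`, `user`, `system`) and their contents are assumptions for the sketch, not the actual Ping representation:

```python
from dataclasses import dataclass

# Hypothetical sketch: a high-level UX state composed of environment,
# user and system sub-states, rather than per-widget UI configuration.
# Field names and values are illustrative assumptions.
@dataclass(frozen=True)
class UXState:
    environment: tuple  # e.g. (("location", "kiosk"), ("time_of_day", "morning"))
    user: tuple         # e.g. (("intent", "order_coffee"),)
    system: tuple       # e.g. (("step", "choose_size"),)

morning_order = UXState(
    environment=(("location", "kiosk"), ("time_of_day", "morning")),
    user=(("intent", "order_coffee"),),
    system=(("step", "choose_size"),),
)
# Frozen dataclasses with tuple fields are hashable, so such states
# can serve directly as nodes in a state graph.
assert morning_order.system == (("step", "choose_size"),)
```

Because the state abstracts away the concrete UI, the same state graph can drive a kiosk, a phone app or digital signage.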


The design of our Semantic Tag Minimal Language is partly inspired by the (now) ubiquitous use of hashtags and usertags on social media.

Instructional Design

We have used visual instructional design techniques extensively to date in our work, and we are particularly appreciative of the text-free design of IKEA manuals. Note also that such task-based instructional design is 'learning by doing' - i.e. proof of learning is in task completion, there is no separate assessment of learning.


Ping is a cloud technology platform that provides simple, inclusive and integrated user experiences using visual language as a service. It systematically bridges the gap between global businesses and their customers and employees with a new layer that integrates easily with existing processes, IT systems, content, and hardware devices.

This might be considered the world's first User Experience as a Service (UXaS). TODO: elaborate on concept of a UXaS

Ping is a Visual UX as a Service


The design and development of Ping adheres to the following principles:


Design and development for Ping proceeds with an agile, iterative design methodology. This involves creating symbols based on the intuition, experience and research of our design team; releasing these symbols into the Ping system for testing and/or real-world use; collecting usage data and other feedback metrics; and using these metrics to discard or promote symbols and to modify symbol designs.

To support this methodology, we automatically measure usage patterns and community feedback and modify the availability of symbols based on usage history. This operates on both a per-user and a global scale, allowing us to learn from individual user preferences as well as from the global community.
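The promote/discard loop can be sketched as a simple triage over usage counts. The thresholds and field names here are assumptions for illustration, not the actual Ping metrics pipeline:

```python
# Hypothetical sketch: promote or retire symbols based on usage metrics.
# The thresholds (promote_at, retire_below) are illustrative assumptions.
def triage_symbols(usage, promote_at=1000, retire_below=10):
    promoted, retired = [], []
    for symbol, count in usage.items():
        if count >= promote_at:
            promoted.append(symbol)   # surface more prominently in the UI
        elif count < retire_below:
            retired.append(symbol)    # candidate for redesign or removal
    return promoted, retired

promoted, retired = triage_symbols({"coffee": 5000, "telegraph": 3})
assert promoted == ["coffee"] and retired == ["telegraph"]
```

The same rule can run per-user (personal symbol availability) or over aggregated global counts.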

We use an agile software development approach. We also make maximum use of existing open-source solutions whenever relevant (if they are available under a permissive software license).

As founders, we are very experienced in using these methodologies together, and have executed a number of projects in our work over the last eight years in the same way. Our methodology aligns with the best-practice in the technology industry.

System Design

Ping is built on three key innovations that constitute our intellectual property.

Ping system architecture


Ping symbol design is at the intersection of art, design, linguistics and psychology. It requires a multi-disciplinary skill-set and a highly iterative approach that uses theory, intuition and cycles of user feedback. The focus at all times is on fulfilling the core function of Ping - effective communication between people.

Sample of Ping symbols

Some of the salient principles of symbol design are -

NOTE: the same principles apply when creating more complex 'infographics' - we consider these to be more detailed versions of Ping symbols and they must follow exactly the same guidelines.

TODO: add notes about symbol primitives/radicals. visual combinatorial methods, narrative methods etc.

Semantic Tag Minimal Language (STML)

Semantic Tag Minimal Language (STML) is a proprietary tagging system that appends words with tags representing the function of the word within language (both grammatical and semantic). Some examples of the tagging conventions are:

Using STML, any real-world micro-communication can be expressed in a functionally equivalent minimal form. For example, the STML for "What would you like to drink?" is - #? !drink @you
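Since the full STML specification is not public, the following is only a speculative sketch of how such tagged strings might be tokenized into (tag, word) pairs. The sigil set and the parsing rule are assumptions inferred from the examples in this document:

```python
import re

# Hypothetical sketch: tokenize an STML-style string into (tag, word)
# pairs. The sigil characters (#, !, @, /) are taken from the examples
# in this document; the real STML grammar is proprietary.
TOKEN = re.compile(r"([#!@/]+)([\w?|()]*)")

def parse_stml(s):
    return [(m.group(1), m.group(2)) for m in TOKEN.finditer(s)]

assert parse_stml("#? !drink @you") == [("#", "?"), ("!", "drink"), ("@", "you")]
```

Under this reading, each tagged token carries its function explicitly, so word order matters less than in natural language.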

Canonical sample of STML sentences

TODO: release more details of STML specification?

NOTE: for further details of the complete STML specification, please contact us at

Mapping from STML to Symbols

We automate the mapping of one or more STML tags to symbols. Symbols are delivered as Scalable Vector Graphics. In this way, all UX I/O can be done using STML alone, with the actual display of symbols being delegated entirely to the endpoint UI which uses the Symbol API to 'translate' between STML and symbols.
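A minimal sketch of this tag-to-symbol translation, assuming the Symbol API resolves each STML tag to an SVG asset - the tag names and file paths below are illustrative, not the real API:

```python
# Hypothetical sketch of an STML -> symbol lookup, assuming the Symbol
# API resolves each tag to an SVG asset URL. Tags and paths are
# illustrative assumptions.
SYMBOL_TABLE = {
    "!drink": "symbols/drink.svg",
    "@you":   "symbols/you.svg",
    "#?":     "symbols/question.svg",
}

def render(tags, table=SYMBOL_TABLE):
    # Fall back to a placeholder when no symbol exists for a tag, so the
    # endpoint UI can degrade gracefully (e.g. by showing the raw tag).
    return [table.get(t, "symbols/unknown.svg") for t in tags]

assert render(["#?", "!drink", "@you"]) == [
    "symbols/question.svg", "symbols/drink.svg", "symbols/you.svg"]
```

Keeping I/O in STML and delegating rendering to the endpoint means the same exchange can be displayed with different symbol sets per device or audience.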

Sample of Ping symbols with STML mapping

Sample of more detailed 'infographic' Ping symbol

UX Director

The UX Director is an evolution of a software implementation that is already successfully used in the field by over 200K users and has to date collected more than 20M data points from user interactions.

It drives a user experience that is efficient and effective whilst still being flexible; it sits halfway between a form (rigidly scripted, no outlet for expression) and a chat (unscripted, unstructured and cognitively expensive).

Ping UX combines the best of process and chat

State Machine

Every event from the field at time t(i) is represented as an event vector E(i). This vector is a composition of state of the physical environment, psychological states for the user and internal system state, and also has an event name and a value payload.

E(i) = [event, value, state(i)]

The UX Director uses the input state and the schema to perform its internal logic for the state transition - i.e. the mapping from one state to the next state in time:

state(i+1) = U(state(i))

The transition function U is a stochastic function: a superposition of a number of possible transitions, each with a probability between 0 and 1. The final state is computed as a "winner" state using a "voting" mechanism.

TODO: clarify and elaborate transition function

The next state is then sent back to the clients via one or more "triggered" events:

E(i+1, j) = [event(j), value(j), state(i+1)]

These event vectors E(i+1, j) completely specify the options for the next user interaction for the intended recipient (either the original sender or a second person) - this output can also be sent to different platforms or locations. Thus, for example, a button push on a kiosk may lead to a push notification on the user's phone.

Values and states are represented as data structures that use STML. As an example -

state(0) containing {!#ok. /size? ##coffee} => state(1) containing {(!#small|!#medium|!#large) ##coffee}
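One plausible reading of the stochastic transition and "voting" mechanism is weighted sampling over the candidate transitions, with the majority outcome declared the winner. This is only a sketch of that interpretation - the candidate states, weights and voting rule are assumptions, not the actual UX Director logic:

```python
import random
from collections import Counter

# Hypothetical sketch of the stochastic transition U: each candidate
# next-state carries a probability; a "winner" is chosen by weighted
# voting over several sampled trials. All names and weights are
# illustrative assumptions.
def transition(state, candidates, trials=101, rng=random):
    # candidates: list of (next_state, probability) pairs for `state`
    states, weights = zip(*candidates)
    votes = Counter(rng.choices(states, weights=weights, k=trials))
    winner, _ = votes.most_common(1)[0]
    return winner

candidates = [("choose_size", 0.7), ("free_text", 0.2), ("cancel", 0.1)]
next_state = transition("order_started", candidates)
assert next_state in {"choose_size", "free_text", "cancel"}
```

An odd number of trials avoids ties between two equally weighted candidates.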

TODO: release more details about implementation of UX Director?

UX Director as a State Machine

For any task scenario, e.g. ordering coffee, the UX Director uses the schema to route actions, modify system state and present users with the available options. Users can reach the most likely options with minimal actions, while still having the freedom to select more advanced options to their preference, or even to enter free-form communication. Another way to think about the UX Director is that (in collaboration with the user) it 'directs' the path of the system through the connected graph of all possible states towards the desired end state (usually a task completion).

For example, when performing a coffee ordering task, the schema may specify possible ordering paths to ask for sugar, milk, extra flavours etc. However, when the UX Director runs in situ, the previous history of a user's activity may be used to skip certain steps or pre-select defaults (e.g. one sugar, whole milk) and thus 'guide' the user to the most probable end state (regular coffee order). The user still has the ability to opt-out and take a less probable path and order something different.

UX as a path through the state graph

The probabilistic nature of the UX Director gives the user some of the 'freedom' of a chat but retains just enough of the hierarchy/structure of a menu/form.


The schema is the 'recipe' or program that runs the UX Director - a data representation of the state machine described above, as a linked data structure of possible transitions organized with respect to tasks and frames. The schema is like the 'script' of an adventure game, but not as prescriptive. To change the UX, we simply modify and republish the schema.
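A linked structure of transitions for the coffee-ordering example might look like the following. This is a hypothetical sketch - the state names, events and the per-user `default` field are assumptions for illustration, not the real Ping schema format:

```python
# Hypothetical sketch of a schema as a linked data structure of possible
# transitions, organized by task. All names are illustrative assumptions.
COFFEE_SCHEMA = {
    "task": "order_coffee",
    "states": {
        "start":       {"on": {"order": "choose_size"}},
        "choose_size": {"on": {"small": "confirm", "medium": "confirm",
                               "large": "confirm"},
                        "default": "medium"},   # pre-selected from user history
        "confirm":     {"on": {"ok": "done", "back": "choose_size"}},
        "done":        {"on": {}},
    },
}

def step(schema, state, event):
    node = schema["states"][state]
    # Unknown events leave the state unchanged (the user can retry).
    return node["on"].get(event, state)

assert step(COFFEE_SCHEMA, "start", "order") == "choose_size"
assert step(COFFEE_SCHEMA, "choose_size", "large") == "confirm"
```

Because the schema is plain data, changing the UX really is a matter of editing and republishing it, with no client-code changes.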

Sample snippet from schema

TODO: release more details about schema structure?

NOTE: for further details of the UX Director, schema etc., please contact us at

Platform Connector

The Platform Connector is our suite of technologies that seamlessly synchronizes experiences between disparate devices and platforms - Touchpoint™ simultaneous touch, Metwork™ facial recognition, NFC, QR codes, Bluetooth etc.


Touchpoint transfers data securely, consensually and anonymously to any number of people within seconds, using our unique "simultaneous touch" cloud API. It works on any device and needs no set-up or sign-up. It is particularly suited to transferring data and UX between kiosks, mobile phones and digital signage. In these scenarios, competing technologies such as QR codes, Bluetooth, NFC etc. are not able to provide the combination of anonymity, platform independence and one-to-many scale.

Touchpoint™ connects disparate devices

Touchpoint allows us to integrate the user experience along a journey - for example, from a ticket ordering kiosk to journey planning to voice navigation on a phone to tap-off at the destination.

To try out Touchpoint and read more, go to


Deployment of Ping for a particular client application involves the following steps -

Ping deployment process

Often, one or more of these deployment steps can be done by our partners - for example, configuration of kiosks to run the Ping UI through a browser container.


Ping is designed for any customer journey or employee experience that is fragmented, complex and/or not inclusive. Our target market is Fortune Global 500 enterprises, with a total of >28M employee experiences/day, >1B customer journeys/day and >$1T profit/year. Current clients are mostly large Australian enterprises.

We target verticals with diverse audiences e.g. hotels, travel, transport, logistics, training, health and safety, retail, mining, medical patient experience etc. Some applications are listed below -

TODO: elaborate and add more scenarios and description of current projects

About Us

As founders, we have always believed that technology should be simple, visual and intuitive. Over the last eight years, we have built a successful software business that uses icons, photos and infographics to simplify customer and employee experiences for enterprise and government. Our software interfaces have now handled >50M transactions by >200K users.

Our methodology, visual IP and software IP have gone through many rounds of iteration in response to customer/market feedback. We are now converting our successful methodology into a scalable platform. Ping is thus a natural evolution and culmination of our recent work.

We have a unique mix of creative and technical skills and a track record of innovation at scale that makes us the ideal team for this venture.

Luke working on Ping in our offices

Shourov Bhattacharya has been turning bold ideas into software since the age of twelve. He is an engineer with degrees in computer science and engineering from the University of Melbourne and a U.S. patent for his work building robots at the University of Delaware. He has worked at companies including Honeywell and Grey Advertising and designed software solutions at scale for clients including McDonalds, Ipsos, Telstra and CSIRO.

He is also a musician who wrote and performed on an iTunes #1 album and has worked as a writer and multi-disciplinary researcher, publishing his work in journals and collaborating with universities and corporates.

Luke Feldman is a user experience and interface design specialist obsessed with visual, icon and instructional design. He has a background in visual arts, game design and animation and is a highly regarded artist with an international fan base. He is also an accomplished martial artist who has competed at international level.

Luke has worked in the U.S. at companies including Apple, Coca-Cola, Cartoon Network and Microsoft, where he led the UI design and marketplace development of their visual social network 'Wallop'.

As a team, we have a proven track record. For over eight years, we have created mobile software products together and successfully sold and licensed them to customers such as McDonalds, Telstra and BHP. Our products are now used daily by more than 200K users in 12+ countries.


For details of our business model, revenue and growth projections and investor opportunities, please contact us at


