UIUX Module 5
Advanced Filtering
The following advanced filtering procedures form a phased framework that clarifies user interfaces for textual search −
FORMULATION:-
Formulation is the fourth stage of the Information Search Process (ISP) and its turning point: a focused perspective is formed from the information encountered, feelings of uncertainty diminish, and confidence begins to increase.
Initiation of Action:-
Consumer behaviour is the study of individuals, groups, or organisations and all the
activities associated with the purchase, use and disposal of goods and services.
Consumer behaviour consists of how the consumer's emotions, attitudes,
and preferences affect buying behaviour. Consumer behaviour emerged in the 1940–
1950s as a distinct sub-discipline of marketing, but has become an interdisciplinary social
science that blends elements from psychology, sociology, social
anthropology, anthropology, ethnography, ethnology, marketing,
and economics (especially behavioural economics).
The study of consumer behaviour formally investigates individual qualities such as demographics, personality, lifestyles, and behavioural variables (such as usage rates, usage occasion, loyalty, brand advocacy, and willingness to provide referrals) in an attempt to understand people's wants and consumption patterns. Consumer behaviour also investigates the influences on the consumer, from social groups such as family, friends, sports, and reference groups, to society in general (brand influencers, opinion leaders).
Research has shown that consumer behaviour is difficult to predict, even for experts in
the field; however, new research methods, such as ethnography, consumer
neuroscience, and machine learning are shedding new light on how consumers make
decisions. In addition, customer relationship management (CRM) databases have
become an asset for the analysis of customer behaviour. The extensive data produced
by these databases enables detailed examination of behavioural factors that contribute to
customer re-purchase intentions, consumer retention, loyalty, and other behavioural
intentions such as the willingness to provide positive referrals, become brand advocates,
or engage in customer citizenship activities. Databases also assist in market
segmentation, especially behavioural segmentation such as developing loyalty segments,
which can be used to develop tightly targeted customised marketing strategies on a one-to-one basis.
Review of Results:-
Refining data consists of cleansing and shaping it. When you cleanse data, you fix or remove data that is incorrect, incomplete, improperly formatted, or duplicated. When you shape data, you customize it by filtering, sorting, combining or removing columns, and performing other operations.
As you manipulate your data, you build a customized Data Refinery flow that
you can modify in real time and save for future re-use. When you save the
refined data set, you typically load it to a different location than where you
read it from. In this way, your source data can remain untouched by the
refinement process.
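As a rough illustration of cleansing (removing duplicated and incomplete rows) and shaping (filtering, sorting, and removing columns), here is a minimal plain-Python sketch; the records and field names are invented for the example and do not come from Data Refinery itself:

```python
# Illustrative records; in practice these would be read from a data source.
records = [
    {"name": "Alice", "age": "34", "city": "Oslo"},
    {"name": "Alice", "age": "34", "city": "Oslo"},   # duplicate row
    {"name": "Bob",   "age": "",   "city": "Lima"},   # incomplete row
    {"name": "Carol", "age": "29", "city": "Kyiv"},
]

# Cleanse: drop duplicates and rows with missing values.
seen = set()
cleansed = []
for row in records:
    key = tuple(sorted(row.items()))
    if key in seen or not all(row.values()):
        continue
    seen.add(key)
    cleansed.append(row)

# Shape: filter to ages >= 30, drop the "city" column, and sort by age.
shaped = sorted(
    ({"name": r["name"], "age": int(r["age"])}
     for r in cleansed if int(r["age"]) >= 30),
    key=lambda r: r["age"],
)
print(shaped)  # [{'name': 'Alice', 'age': 34}]
```

Note that the shaped result is written to a new list, leaving the source `records` untouched, mirroring how a refined data set is saved to a different location than it was read from.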
Prerequisites
Before you can refine data, you’ll need a project.
REFINEMENT:-
Refine your data
Access Data Refinery from within a project. Click Add to project, and
then choose DATA REFINERY FLOW. Then select the data you want
to work with.
Alternatively, from the Assets tab of a project page, you can perform
one of the following actions:
Tip: If your data doesn’t display in tabular form, specify the format of
your data source. Go to the Data tab. Scroll down to the SOURCE
FILE information at the bottom of the page. Click the Specify data
format icon.
On the Data tab, apply operations to cleanse, shape, and enrich your
data. You can enter R code in the command line and let autocomplete
assist you in getting the correct syntax.
When you’ve refined the sample data set to suit your needs, click the
Run Data Refinery flow icon in the toolbar to run the Data Refinery flow
on the entire data set.
By default, Data Refinery uses the name of the data source to name
the Data Refinery flow and the target data set. You can change these
names, but you can’t change the project that these data assets belong
to.
In the Data Refinery flow details pane, optionally edit the name
of the Data Refinery flow and enter a description for it. You can
also add a one-time or repeating schedule for the Data Refinery
flow.
In the Data Refinery flow output pane, optionally edit the name
of the target data set and enter a description for it. You can save
the target data set to the project, to a connection, or to a
connected data asset. If you save it to the project, you can save
it as a new data asset (by default) or you can replace an existing
data asset. To save the target data set to a connection or as a
connected data asset or to replace an existing data asset,
use Change Location.
For the Update and Upsert options, you’ll need to select the
columns in the output data set to compare to columns in the
existing data set. The output and target data sets must have the
same number of columns, and the columns must have the same
names and data types in both data sets.
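The update/upsert comparison described above can be sketched in plain Python; the key column, the records, and the `upsert` helper are illustrative assumptions, not Data Refinery's actual implementation:

```python
# Illustrative target and output data sets with matching column names/types.
target = [
    {"id": 1, "status": "old"},
    {"id": 2, "status": "old"},
]
output = [
    {"id": 2, "status": "new"},   # key matches an existing row -> update
    {"id": 3, "status": "new"},   # no matching key -> insert
]

def upsert(target, output, key):
    # Index the target rows by the comparison column, then let output
    # rows update matching keys and append new ones.
    merged = {row[key]: row for row in target}
    for row in output:
        merged[row[key]] = row
    return list(merged.values())

print(upsert(target, output, "id"))
```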
The dataset
However the fixed list is stored, it must contain entries of the following kind:
Here we’ll simply send back the search results and the
generated list of 5 facet keys with their respective values.
The response should include whatever the front end needs
to build its search results page. In all facet search query
cycles, the front end needs:
Results:
Facets:
Next, render the data. As this can take many forms, and
this article is not strictly a tutorial, take a look at
our dynamic faceting GitHub repo for a complete front-end
implementation.
Making the solution more robust
You’ll want to let your users know how many records have a given facet value. These counts are useful because they tell users about the shape of the search results; for example, it is useful to know that there are more short-sleeved shirts than long-sleeved shirts. These facet counts are normally calculated in the back end during query execution.
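Back-end facet counting can be sketched as a simple tally over the matching records; the records and the "sleeve" attribute below are invented for the example:

```python
from collections import Counter

# Illustrative search results; in practice these come from query execution.
results = [
    {"name": "Tee",    "sleeve": "short"},
    {"name": "Henley", "sleeve": "short"},
    {"name": "Oxford", "sleeve": "long"},
]

def facet_counts(results, attribute):
    # Count how many records carry each value of the given facet attribute.
    return Counter(r[attribute] for r in results)

print(facet_counts(results, "sleeve"))  # Counter({'short': 2, 'long': 1})
```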
Going one step further, it’s also useful to include the type of
attribute:
Command languages
Multimedia search enables information search using queries in multiple data types, including text and other multimedia formats. Multimedia search can be implemented through multimodal search interfaces, i.e., interfaces that allow users to submit search queries not only as textual requests but also through other media. We can distinguish two methodologies in multimedia search:
Metadata search: the search is made on the layers of metadata.
Query by example: The interaction consists in submitting a piece of information
(e.g., a video, an image, or a piece of audio) for the purpose of finding similar
multimedia items.
Metadata search
The search is performed on metadata layers, which contain information about the content of a multimedia file. Metadata search is easier, faster, and more effective because, instead of working with complex material such as audio, video, or images, it searches over text.
There are three processes which should be done in this method:
Query by example
In query by example, the element used to search is a piece of multimedia content (image, audio, or video); in other words, the query is itself a media item. Audiovisual indexing is often used, and it is necessary to choose the criteria used for creating metadata. The search process can be divided into three parts:
Generate descriptors for the media which we are going to use as query and the
descriptors for the media in our database.
Compare descriptors of the query and our database’s media.
List the media sorted by maximum coincidence.
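The three parts above can be sketched with descriptors represented as plain feature vectors and cosine similarity as the comparison measure; the vectors and media names are invented for the example:

```python
import math

def cosine(a, b):
    # Cosine similarity between two descriptor vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Step 1: descriptors for the database media and for the query media
# (in reality these would be generated by a feature extractor).
database = {
    "clip_a": [1.0, 0.0, 0.5],
    "clip_b": [0.9, 0.1, 0.4],
    "clip_c": [0.0, 1.0, 0.0],
}
query_descriptor = [1.0, 0.0, 0.4]

# Steps 2 and 3: compare the query descriptor against each stored
# descriptor and list the media sorted by maximum coincidence.
ranking = sorted(
    database,
    key=lambda m: cosine(query_descriptor, database[m]),
    reverse=True,
)
print(ranking)
```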
Image search: Although simple metadata search is most commonly used, indexing methods are increasingly being applied to make the results of user queries more accurate through query by example. For example, QR codes.
Video search: Videos can be searched for simple metadata or by complex metadata
generated by indexing. The audio contained in the videos is usually scanned by audio
search engines.
Audio search engine
There are different methods of audio searching:
Voice search engine: Allows the user to search using speech instead of text. It uses
algorithms of speech recognition. An example of this technology is Google Voice
Search.
Music search engine: Most music search applications work on simple metadata (artist, track name, album, and so on), but there are also music recognition programs, for example Shazam or SoundHound.
Image Search – Video Search – Audio Search:
Searching
To search for media with the Shutterstock API, you need:
An account at https://www.shutterstock.com
An application at https://www.shutterstock.com/account/developers/apps
One of these types of authentication:
o Basic authentication: You provide the client ID and client secret with
each API request.
o OAuth authentication: You use the client ID and client secret to get
an authentication token and use this token in each request. In the
examples on this page, the access token is in
the SHUTTERSTOCK_API_TOKEN environment variable. For more
information about authentication, see Authentication.
Some API subscriptions return a limited set of results. See Subscriptions in the API
reference.
Keyword searches
To search for media with a keyword, pass the search keywords to the appropriate
endpoint:
curl -X GET "https://api.shutterstock.com/v2/images/search" \
  -H "Authorization: Bearer $SHUTTERSTOCK_API_TOKEN" \
  -G \
  --data-urlencode "query=kites"

# Video search follows the same pattern against the /v2/videos/search endpoint.

curl -X GET "https://api.shutterstock.com/v2/audio/search" \
  -H "Authorization: Bearer $SHUTTERSTOCK_API_TOKEN" \
  -G \
  --data-urlencode "query=bluegrass"
Conditional searches
Searches for images and video can use AND, OR, and NOT conditions, but searches
for audio do not support these keywords.
To use AND, OR, or NOT in searches, you include these operators in the query parameter. The operators must be in upper case and must be in English, regardless of the language the keywords are in.
You can group conditional search terms with parentheses. For example, to search for
images with either dogs or cats, but not both, use (dog NOT cat) OR (cat NOT
dog).
Here are some examples of searching with conditions:
curl -X GET "https://api.shutterstock.com/v2/images/search" \
  -H "Authorization: Bearer $SHUTTERSTOCK_API_TOKEN" \
  -G \
  --data-urlencode "query=dog AND cat"

curl -X GET "https://api.shutterstock.com/v2/images/search" \
  -H "Authorization: Bearer $SHUTTERSTOCK_API_TOKEN" \
  -G \
  --data-urlencode "query=(dog NOT cat) OR (cat NOT dog)"
Bulk searches
You can run up to 5 image searches in a single request with the POST
/v2/images/bulk_search endpoint. The API returns a maximum of 20 results for
each search. To use this endpoint, pass multiple searches in the queries array in
the request body. Each search in the queries array has the same parameters as an
individual image search.
You can also pass query parameters in the request. These parameters become
defaults for each search, but the parameters in the individual searches override
them.
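The way request-level parameters act as defaults that each individual search can override amounts to a simple merge, which can be sketched in Python; the parameter values below are illustrative:

```python
# Request-level query parameters act as defaults for every search.
request_defaults = {"license": ["commercial"], "sort": "popular"}

# Each entry in the queries array may override those defaults.
queries = [
    {"query": "cat", "license": ["editorial"]},   # overrides the license
    {"query": "dog", "orientation": "horizontal"},
]

# Per-search parameters win over the request-level defaults.
effective = [{**request_defaults, **q} for q in queries]
print(effective)
```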
Here is an example of sending 2 searches in a single request:
DATA='[
  {
    "query": "cat",
    "license": ["editorial"],
    "sort": "popular"
  },
  {
    "query": "dog",
    "orientation": "horizontal"
  }
]'
curl -X POST "https://api.shutterstock.com/v2/images/bulk_search" \
  -H "Authorization: Bearer $SHUTTERSTOCK_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "$DATA"
GEOGRAPHIC INFORMATION SYSTEMS (GIS) SEARCH:
What We Do
Geographic Information Systems (GIS) is used to enhance decision making and provide
maps and information to the public and County agencies including:
Assessors Office
Community Development Resource Agency
County Executive Office
Elections
Facility Services
Health and Human Services
Public Works
Requests
For zoning information, please email our Public Counter Technicians.
For GIS/mapping requests (not zoning), please email the GIS Division.
Multilingual Searches :
Multilingual meta-search engines: These search engines aggregate results from
multiple search engines, each specialized in a particular language or region.
They provide users with a unified interface to search across different search
engines and retrieve multilingual results from various sources.
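A multilingual meta-search engine of this kind can be sketched as follows; the per-language engine functions and their results are invented stand-ins for real search back ends:

```python
# Hypothetical per-language engines; real ones would call external services.
def search_en(query):
    return [{"url": "https://example.com/a", "lang": "en"}]

def search_fr(query):
    return [{"url": "https://example.com/a", "lang": "fr"},
            {"url": "https://example.com/b", "lang": "fr"}]

def meta_search(query, engines):
    # Aggregate results from every engine behind one interface,
    # de-duplicating by URL so each source appears once.
    seen, merged = set(), []
    for engine in engines:
        for hit in engine(query):
            if hit["url"] not in seen:
                seen.add(hit["url"])
                merged.append(hit)
    return merged

print(meta_search("kites", [search_en, search_fr]))
```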
A Moroccan immigrant who speaks some French and German in addition to his Arabic dialect is multilingual, as is the conference interpreter who confidently uses her three native or first languages of English, German and French as working languages.
The Main Benefits of Multilingualism
Cognitive Benefits. Learning and knowing several languages sharpens the
mind and improves memory. ...
Improvement of Ability to Multitask. Being able to speak in multiple languages
allows you to perform better in many ways. ...
Improvement of Communication Skills.
DATA VISUALIZATION:
In the world of Big Data, data visualization tools and technologies are
essential to analyze massive amounts of information and make data-driven
decisions.
Advantages
Our eyes are drawn to colors and patterns. We can quickly identify red from
blue, and squares from circles. Our culture is visual, including everything from
art and advertisements to TV and movies. Data visualization is another form of
visual art that grabs our interest and keeps our eyes on the message. When
we see a chart, we quickly see trends and outliers. If we can see something, we
internalize it quickly. It’s storytelling with a purpose. If you’ve ever stared at a
massive spreadsheet of data and couldn’t see a trend, you know how much
more effective a visualization can be.
Disadvantages
While there are many advantages, some of the disadvantages may seem less
obvious. For example, when viewing a visualization with many different data points, it’s easy to make an inaccurate assumption, or sometimes the visualization is simply designed poorly, so that it’s biased or confusing.
It’s hard to think of a professional industry that doesn’t benefit from making
data more understandable. Every STEM field benefits from understanding data
—and so do fields in government, finance, marketing, history, consumer
goods, service industries, education, sports, and so on.
While we’ll always wax poetically about data visualization (you’re on the
Tableau website, after all) there are practical, real-life applications that are
undeniable. And, since visualization is so prolific, it’s also one of the most
useful professional skills to develop. The better you can convey your points
visually, whether in a dashboard or a slide deck, the better you can leverage
that information. The concept of the citizen data scientist is on the rise. Skill
sets are changing to accommodate a data-driven world. It is increasingly
valuable for professionals to be able to use data to make decisions and use
visuals to tell stories of when data informs the who, what, when, where, and
how.
Task-based effectiveness of basic visualizations
This is a summary of a recent paper on an age-old topic: what visualisation
should I use? No prizes for guessing “it depends!” Is this the paper to finally
settle the age-old debate surrounding pie-charts??
Saket et al. look at five of the most basic visualisations (bar charts, line charts, pie charts, scatterplots, and tables) and study their task-based effectiveness. The tasks come from the work of Amar et al., describing a set of ten low-level analysis tasks, including finding anomalies within a given set of attribute values, finding correlations (determining whether or not there is a correlation between two attribute values), and characterising distributions (for example, figuring out which range contains a given fraction of the data points).
DATA & VIEW SPECIFICATION:
Data and View Specification involve determining which data is to be shown and visualized with
programs such as Microsoft Excel. Then it involves the filtering of data which shifts the focus
among the different data subsets to isolate specific categories of values. Sorting the data can show
surface trends and clusters and organize data according to a unit of analysis. The following image
shows a more complex form of a matrix-based visualization of a social network.
View Manipulation :
The second dynamic is View Manipulation which consists of highlighting patterns, investigation of
hypotheses and revealing additional details. Selection allows for pointing to an item of interest, for
example, dragging along axes to "create interactive selections that highlight automobiles with low
weight and high mileage." Navigating is determined by where the analyst begins, such as in a crime
map that depicts crime activity by time and region. Coordinating allows the analyst to see multiple
coordinated views at once which can facilitate comparison. This can be done in histograms, maps
or network diagrams. The following image shows a complex patchwork of interlinked tables, plots
and maps to analyze outcomes of elections in Michigan.
The image shows a combination of tables, plots and maps. The final step, organization, involves
arranging visualization views, legends and controls for more simplified viewing.
The final dynamic is Process and Provenance which involves the actual interpretation of data.
Recording involves chronicling and visualizing analysts' interaction histories in both a
chronological and sequential fashion. Annotation includes recording, organizing and
communicating insights gained during analysis. Sharing involves the accumulation of multiple
analyses and interpretations derived from several people and the dissemination of results. Guiding
is the final step and includes developing new strategies to guide newcomers.
Visualizing data is crucial to ensure that all the data you collect translates into
decisions that amplify your business growth. Here are five data visualizations
that are commonly used by companies across the world.
1. Bar Chart
2. Doughnut Chart or Pie Chart
3. Line Graph or Line Chart
4. Pivot Table
5. Scatter Plot
1. Bar Chart
Bar charts or column charts have rectangular bars arranged along the X- or Y-axis. Comparing objects by aligning them on the same parameters makes this the most popular visualization out there. Bar charts can be used to track changes over time; however, bar graphs used for time series yield accurate results only when the changes are considerably large. There are different categories of bar graphs, like stacked bar graphs, 100% stacked bar graphs, grouped bar graphs, box plots, and waterfall charts (advanced bar graphs).
Use Case: You can use a bar chart for a visual representation of your overall
business revenue against your peers. If you want to compare the individual
split of product-wise revenue, your best choice would be a stacked/grouped
bar chart.
2. Doughnut Chart or Pie Chart
Use Case: You can analyze your budget using a doughnut chart. Splitting the total budget across your expenses, investments, loans, savings, etc. would give you an instant understanding of your budget plan.
3. Line Graph or Line Chart
Use Case: Say you want to analyze your business’ month-wise expenditure. The line graph will give you the best rendition by plotting the months on the X-axis and the expenditure on the Y-axis.
4. Pivot Table
A pivot table, as the name indicates, has columns and rows with aggregated values populated in the cells. The pivot table is the most straightforward visualization that can be used to convey a huge amount of data at a single glance. It is easy to build and flexible to modify. However, unlike the other infographic visualizations discussed here, tables are not graphical and hence can be used only in specific cases.
Use Case: Financial reports are generally depicted over tables. Bringing in
the years on rows and operating cash flow, investing cash flow, cash from
financing, and other metrics on the columns will help you understand your
business’s cash flow over the years.
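A pivot table of this shape (years on rows, cash-flow metrics on columns, summed amounts in the cells) can be sketched in plain Python; the sales records and metric names are invented for the example:

```python
from collections import defaultdict

# Illustrative transaction records to be pivoted.
sales = [
    {"year": 2022, "metric": "operating", "amount": 10},
    {"year": 2022, "metric": "investing", "amount": 5},
    {"year": 2023, "metric": "operating", "amount": 7},
    {"year": 2022, "metric": "operating", "amount": 3},
]

# Rows = years, columns = metrics, cells = aggregated (summed) amounts.
pivot = defaultdict(lambda: defaultdict(int))
for row in sales:
    pivot[row["year"]][row["metric"]] += row["amount"]

print({year: dict(cols) for year, cols in pivot.items()})
# {2022: {'operating': 13, 'investing': 5}, 2023: {'operating': 7}}
```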
5. Scatter Plot
A scatter plot shows the relationship between two numerical variables plotted along the X and Y axes. If you are a data scientist working with different sets of data, the scatter plot would be something you commonly work with, but for a novice user it could be a little unfamiliar. Scatter plots are best suited to comparing two numerical values simultaneously. Segmentation charts and bubble charts are advanced versions of the scatter plot. The segmentation chart demarcates the scatter plot into four quadrants, making the choice easier for users. Bubble charts bring in an additional dimension by displaying varied sizes of bubbles over the scatter plot.
Use Case: You can present data for product price revision using the scatter
plot by bringing in the number of units sold on the X-axis, current price on the
Y-axis, and the products on the quadrant. It will give you a clear outlook on
the products that have a low price yet have sold a good deal and can be
considered for price increment. Alternatively, you can bring in a price drop for
products that have a high price but are in low demand.
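The quadrant segmentation described in this use case can be sketched as follows; the product data is invented, and using the median of each axis as the quadrant cut point is an illustrative choice, not a fixed rule:

```python
import statistics

# Illustrative products with units sold (X-axis) and current price (Y-axis).
products = [
    {"name": "A", "units": 900, "price": 5},
    {"name": "B", "units": 100, "price": 50},
    {"name": "C", "units": 800, "price": 40},
    {"name": "D", "units": 50,  "price": 8},
]

# Split each axis at its median to form four quadrants.
units_cut = statistics.median(p["units"] for p in products)
price_cut = statistics.median(p["price"] for p in products)

def quadrant(p):
    high_units = p["units"] >= units_cut
    high_price = p["price"] >= price_cut
    if high_units and not high_price:
        return "raise price candidate"   # sells well despite a low price
    if high_price and not high_units:
        return "price drop candidate"    # expensive but in low demand
    return "leave as is"

print({p["name"]: quadrant(p) for p in products})
```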
These types of charts, along with area charts, heat maps, and treemaps are
widely used visualization techniques by data analysts, marketers, and
financial analysts across the world. However, there are specific visualizations
that can be used to tackle the reporting needs of unique data sets – for
instance, it would be ideal to use choropleth (Map chart), tree diagram, and
radar chart for geospatial, conditional, and multivariable data points
respectively. Choosing the perfect visualization from different types of data
visualizations can be challenging, but with a basic understanding of these
fundamental charts, your choice will be easier.
Data visualization is a quick and simple technique to depict complicated ideas graphically for improved comprehension and intuition. It must surface the diverse relationships and patterns concealed by massive amounts of data. We can use traditional graphical representations to organize and represent data; still, it can be challenging to display huge amounts of data that are very diverse, uncertain, and semi-structured or unstructured in real time. Massive parallelization is required to handle and visualize such dynamic data. In this article, we will cover the major challenges faced by big data visualization and their solutions.
Content
Data Quality
Accuracy
Completeness
Consistency
Format
Integrity
Timeliness
Not Choosing the Right Data Visualization Tools
Confusing Color Palette
Analytical & Technical Challenges
Data Quality
Data quality plays a crucial role in determining the effectiveness of an analysis. Not all data is created in
the same way, and each has a different origin, hence its heterogeneity.
No matter how powerful and comprehensive the big data tools at the organization’s disposal
are, insufficient or incomplete data can often lead data scientists to conclusions that may not
be entirely correct and, therefore, could negatively impact business.
The effectiveness of big data analysis depends on the accuracy, consistency, relevance, completeness, and timeliness of the data used. Without these qualities, the data analysis ceases to be reliable.
Accuracy
The most important thing is: How accurate is the data? How much can you trust it? Is there
certainty about the collection of relevant data? The values in each field of the database must
be correct and accurately represent “real world” values.
Example: A registered address must be a real address. Names must be spelled correctly.
Completeness
The data must contain all the necessary information and be easily understandable by the user.
Example: If the first and last name are required in a form, but the middle name is optional,
the form can be considered complete even if the middle name is not entered.
Consistency
The data must be the same throughout the organization and in all systems.
Example: The data of a sale registered in the CRM of a company must match the data
registered in the accounting dashboard that you manage.
Format
The data must meet certain standards of type, format, size, etc.
Example: All dates must follow the format DD/MM/YY, or whichever format you prefer.
Names should only have letters, no numbers or symbols.
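Checks like these can be sketched with regular expressions; the patterns and sample values below are illustrative (for instance, the name rule here ignores accented letters, which a real validator would need to handle):

```python
import re

# DD/MM/YY date format and letters-only names, as described above.
DATE_RE = re.compile(r"^\d{2}/\d{2}/\d{2}$")
NAME_RE = re.compile(r"^[A-Za-z]+$")

def valid_date(value):
    # True when the value matches the DD/MM/YY layout.
    return bool(DATE_RE.match(value))

def valid_name(value):
    # True when the value contains only letters, no numbers or symbols.
    return bool(NAME_RE.match(value))

print(valid_date("07/03/24"), valid_name("Alice"))    # True True
print(valid_date("2024-03-07"), valid_name("Al1ce"))  # False False
```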
Integrity
The data must be valid, which means that there are registered relationships that connect all the
data. Keep in mind that unlinked records could introduce duplicate entries to your system.
Example: If you have an address registered in your database, but it is not linked to any
individual, business, or other entity, it is invalid.
Timeliness
Data must be available when the user expects and needs it.
Example: A hospital must track the time of patient care events in the ICU or emergency room
to assist doctors and nurses.
Many data visualization tools, such as Tableau, Microsoft Power BI, Looker, Sisense, Qlik,
etc., offer data visualization integration capabilities. If your organization already uses one of
these tools, start there. If not, try one. Once you select a tool, you’ll need to do a series of
prototypes to validate capabilities, ease of use, and operational considerations.