All You Need To Know About Google’s Project Soli


On Saturday, a couple of interesting Google Pixel 4 images started circulating on Twitter, showing an oval-shaped cutout on the right of the detached bezels for the Pixel 4 and Pixel 4XL.

A lot of theories have come to the fore since then, but the majority of them hint at Project Soli integration. We are not sure whether the Pixel 4 devices will feature Project Soli or not, but the technology itself is fascinating, to say the least.

So, without further ado, let’s get to it.

What is Project Soli?

Google has assigned some of its best engineers to create a piece of hardware which would allow the human hand to act as a universal input device. In smartphones and wearables, we currently use the touch panel to interact with the system. Soli aims to remove the middleman (touch panel) by letting you interact with your device using simple hand gestures, without making contact.

How does it work?

The Soli sensor emits electromagnetic waves in a broad beam. The objects in the beam’s path scatter this energy, feeding some of it back to the radar antenna. The radar processes some properties of the returned signal, such as energy, time delay, and frequency shift. These properties, in turn, allow Soli to identify the object’s size, shape, orientation, material, distance, and velocity.

Soli has been fine-tuned to pick up even the most subtle finger gestures without needing large bandwidth or high spatial resolution. Instead, the radar tracks subtle variations in the received signal over time to decode finger movements and distorting hand shapes.

How to use Soli?

Soli uses Virtual Tools to understand hand and finger gestures and carry out the tasks associated with them. According to Google, Virtual Tools are gestures that mimic familiar interactions with physical tools.

Imagine holding a key between your thumb and index finger. Now, rotate the key as if you were opening a lock. That’s it. Soli, in theory, will pick up the gesture and perform the task associated with it.

So far, Soli recognizes three primary Virtual Gestures.

Button

Imagine an invisible button between your thumb and index finger; press the button by tapping the fingers together. Primary use is expected to be selecting an application or performing in-app actions.

Dial

Imagine a dial that you turn up or down by rubbing your thumb against the index finger. Primary use is expected to be volume control.

Slider

Finally, think about using a Virtual Slider: brush your thumb across your index finger to move it. Primary use is expected to be controlling horizontal sliders, such as brightness control.

Feedback comes from the haptic sensation of your fingers touching one another.

What are the applications?

As no mainstream device has implemented Soli so far, it’s hard to guess how it’d perform in the wild. But if all goes according to plan, the radar could become fundamental to smartphones, wearables, IoT components, and even cars in the future.

The radar is super compact — 8mm x 10mm — doesn’t use much energy, has no moving parts, and packs virtually endless potential. Taking all of that into consideration, a contact-less smartphone experience doesn’t seem that far-fetched.


Bert Explained: What You Need To Know About Google’s New Algorithm

Google’s newest algorithmic update, BERT, helps Google understand natural language better, particularly in conversational search.

BERT will impact around 10% of queries. It will also impact organic rankings and featured snippets. So this is no small change!

But did you know that BERT is not just any algorithmic update, but also a research paper and machine learning natural language processing framework?

In fact, in the year between its publication and its implementation in production search, BERT caused a frenetic storm of activity in NLP research.

On November 20, I moderated a Search Engine Journal webinar presented by Dawn Anderson, Managing Director at Bertey.

Anderson explained what Google’s BERT really is and how it works, how it will impact search, and whether you can try to optimize your content for it.

Here’s a recap of the webinar presentation.

What Is BERT in Search?

BERT, which stands for Bidirectional Encoder Representations from Transformers, is actually many things.

It’s most popularly known as a Google search algorithm ingredient/tool/framework called Google BERT, which aims to help Search better understand the nuance and context of words in searches and better match those queries with helpful results.

BERT is also an open-source research project and academic paper. First published in October 2018 as BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, the paper was authored by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.

Additionally, BERT is a natural language processing (NLP) framework that Google produced and then open-sourced so that the whole natural language processing research field could actually get better at natural language understanding overall.

You’ll probably find that most mentions of BERT online are NOT about the Google BERT update.

There are lots of papers about BERT by other researchers that aren’t about what you would consider the Google BERT algorithm update.

BERT has dramatically accelerated natural language understanding (NLU) more than anything, and Google’s move to open-source BERT has probably changed natural language processing forever.

The machine learning (ML) and NLP communities are very excited about BERT as it takes a huge amount of heavy lifting out of carrying out research in natural language. It has been pre-trained on a lot of words: the whole of English Wikipedia (2,500 million words).

Vanilla BERT provides a pre-trained starting-point layer for neural networks across diverse machine learning and natural language tasks.

While BERT has been pre-trained on Wikipedia, it is fine-tuned on questions and answers datasets.

One of those question-and-answer data sets it can be fine-tuned on is called MS MARCO: A Human Generated MAchine Reading COmprehension Dataset built and open-sourced by Microsoft.

Real Bing questions and answers (anonymized queries from real Bing users) have been built into a dataset for ML and NLP researchers to fine-tune on, and they then compete with each other to build the best model.

Researchers also compete over Natural Language Understanding with SQuAD (Stanford Question Answering Dataset). BERT now even beats the human reasoning benchmark on SQuAD.

Lots of the major AI companies are also building BERT versions:

Microsoft extends on BERT with MT-DNN (Multi-Task Deep Neural Network).

RoBERTa from Facebook.

SuperGLUE Benchmark was created because the original GLUE Benchmark became too easy.

What Challenges Does BERT Help to Solve?

There are things that we humans understand easily that machines, including search engines, don’t really understand at all.

The Problem with Words

The problem with words is that they’re everywhere, and more and more content is out there.

Words are problematic because plenty of them are ambiguous, polysemous, and synonymous.

Bert is designed to help solve ambiguous sentences and phrases that are made up of lots and lots of words with multiple meanings.

Ambiguity & Polysemy

Almost every other word in the English language has multiple meanings. In spoken word, it is even worse because of homophones and prosody.

For instance, “four candles” and “fork handles” for those with an English accent. Another example: comedians’ jokes are mostly based on the play on words because words are very easy to misinterpret.

It’s not very challenging for us humans because we have common sense and context, so we can understand all the other words surrounding a situation or conversation; search engines and machines don’t.

This does not bode well for conversational search into the future.

Word’s Context

“The meaning of a word is its use in a language.” – Ludwig Wittgenstein, Philosopher, 1953

Basically, this means that a word has no meaning unless it’s used in a particular context.

The meaning of a word changes literally as a sentence develops due to the multiple parts of speech a word could be in a given context.

Case in point: running the Stanford Part-of-Speech Tagger on just the short sentence “I like the way that looks like the other one.” shows that the word “like” is tagged as two separate parts of speech (POS).

The word “like” may be used as different parts of speech including verb, noun, and adjective.

So literally, the word “like” has no meaning because it can mean whatever surrounds it. The context of “like” changes according to the meanings of the words that surround it.

The longer the sentence is, the harder it is to keep track of all the different parts of speech within the sentence.

On NLR & NLU

Natural Language Recognition Is NOT Understanding

Natural language understanding requires an understanding of context and common sense reasoning. This is VERY challenging for machines but largely straightforward for humans.

Natural Language Understanding Is Not Structured Data

Structured data helps to disambiguate but what about the hot mess in between?

Not Everyone or Thing Is Mapped to the Knowledge Graph

There will still be lots of gaps to fill. Here’s an example.

As you can see here, we have all these entities and the relationships between them. This is where NLU comes in as it is tasked to help search engines fill in the gaps between named entities.

How Can Search Engines Fill in the Gaps Between Named Entities?

Natural Language Disambiguation

“You shall know a word by the company it keeps.” – John Rupert Firth, Linguist, 1957

Words that live together are strongly connected:

Co-occurrence.

Co-occurrence provides context.

Co-occurrence changes a word’s meaning.

Words that share similar neighbors are also strongly connected.

Similarity and relatedness.

…and build vector space models for word embeddings.
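The bullet points above can be sketched in a few lines of plain Python. This toy counter (the sentences and the sentence-wide window are illustrative assumptions, not from the webinar) counts how often word pairs co-occur:

```python
from collections import Counter
from itertools import combinations

sentences = [
    "he kicked the bucket",
    "the bucket was filled with water",
]

# Count how often each ordered pair of words appears in the same sentence
window_counts = Counter()
for s in sentences:
    words = s.split()
    for a, b in combinations(words, 2):
        window_counts[(a, b)] += 1
        window_counts[(b, a)] += 1

print(window_counts[("the", "bucket")])  # → 2, since the pair co-occurs in both sentences
```

Stacking each word’s co-occurrence counts into a vector is the simplest version of the “vector space models for word embeddings” the slide refers to.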

The NLP models learn the weights of the similarity and relatedness distances. But even if we understand the entity (thing) itself, we still need to understand a word’s context.

On their own, single words have no semantic meaning so they need text cohesion. Cohesion is the grammatical and lexical linking within a text or sentence that holds a text together and gives it meaning.

Semantic context matters. Without surrounding words, the word “bucket” could mean anything in a sentence.

He kicked the bucket.

I have yet to cross that off my bucket list.

The bucket was filled with water.

An important part of this is part-of-speech (POS) tagging.

How BERT Works

Past language models (such as Word2Vec and GloVe) built context-free word embeddings. BERT, on the other hand, provides “context”.

To better understand how BERT works, let’s look at what the acronym stands for.

B: Bi-directional

Previously, all language models (e.g., Skip-gram and Continuous Bag of Words) were uni-directional, so they could only move the context window in one direction: a moving window of “n” words (either left or right of a target word) to understand a word’s context.

Most language models are uni-directional. They can traverse a word’s context window from only left to right or right to left: one direction, but not both at the same time.

BERT is different. BERT uses bi-directional language modeling (which is a FIRST).

With contextual language modeling, BERT can see the WHOLE sentence on either side of a word, and all of the words almost at once.

ER: Encoder Representations

What gets encoded is decoded. It’s an in-and-out mechanism.

T: Transformers

BERT uses “transformers” and “masked language modeling”.

One of the big issues with natural language understanding in the past has been not being able to understand in what context a word is referring to.

Pronouns, for instance. It’s very easy to lose track of who somebody is talking about in a conversation, and even humans can struggle to keep track of who is being referred to all the time.

Search engines face the same problem: they struggle to keep track of references when you say he, she, they, we, it, and so on.

The attention mechanism in transformers focuses on the pronouns and all the word meanings that go together, to tie back who or what is being spoken about in any given context.

Masked language modeling prevents the word under focus from seeing itself.

When the mask is in place, BERT just guesses at what the missing word is. It’s part of the fine-tuning process as well.
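As a toy illustration of the idea (a simplification, not BERT’s actual WordPiece pipeline), masking hides a fraction of tokens so the model must predict them without the target word seeing itself:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Replace roughly mask_rate of the tokens with [MASK]."""
    rng = random.Random(seed)
    n = max(1, int(len(tokens) * mask_rate))
    positions = set(rng.sample(range(len(tokens)), n))
    # The masked positions are what the model is trained to fill back in
    return [("[MASK]" if i in positions else t) for i, t in enumerate(tokens)]

print(mask_tokens("the man went to the store to buy milk".split()))
```

During pre-training, BERT predicts the original word at each `[MASK]` slot from the unmasked context on both sides.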

What Types of Natural Language Tasks Does BERT Help With?

BERT will help with things like:

Named entity determination.

Textual entailment (next sentence prediction).

Coreference resolution.

Question answering.

Word sense disambiguation.

Automatic summarization.

Polysemy resolution.

How BERT Will Impact Search

BERT Will Help Google to Better Understand Human Language

BERT’s understanding of the nuances of human language is going to make a massive difference to how Google interprets queries, because people are obviously searching with longer, more question-like queries.

BERT Will Help Scale Conversational Search

BERT will also have a huge impact on voice search (as an alternative to problem-plagued Pygmalion).

Expect Big Leaps for International SEO

BERT has this mono-linguistic to multi-linguistic ability because a lot of patterns in one language do translate into other languages.

There is a possibility to transfer a lot of the learnings to different languages even though it doesn’t necessarily understand the language itself fully.

Google Will Better Understand ‘Contextual Nuance’ & Ambiguous Queries

A lot of people have been complaining that their rankings have been impacted.

But I think that that’s probably more because Google in some way got better at understanding the nuanced context of queries and the nuanced context of content.

So perhaps, Google will be better able to understand contextual nuance and ambiguous queries.

Should You (or Can You) Optimize Your Content for BERT?

Probably not.

Google BERT is a framework of better understanding. It doesn’t judge content per se. It just better understands what’s out there.

For instance, Google BERT might suddenly understand more, and maybe there are over-optimized pages out there that might suddenly be impacted by something else, like Panda, because Google’s BERT realized that a particular page wasn’t that relevant for something.

That’s not to say you should optimize for BERT; you’re probably better off just writing naturally in the first place.


Image Credits

All screenshots taken by author, November 2019


Everything You Need To Know About Scikit-Learn

Introduction

Scikit-learn is one Python library we all inevitably turn to when we’re building machine learning models. I’ve built countless models using this wonderful library and I’m sure all of you must have as well.

There’s no question – scikit-learn provides handy tools with easy-to-read syntax. Among the pantheon of popular Python libraries, scikit-learn ranks in the top echelon along with Pandas and NumPy. These three Python libraries provide a complete solution to various steps of the machine learning pipeline.

I love the clean, uniform code and functions that scikit-learn provides. It makes it really easy to use other techniques once we have mastered one. The excellent documentation is the icing on the cake as it makes a lot of beginners self-sufficient with building machine learning models.

The developers behind scikit-learn have come up with a new version (v0.22) that packs in some major updates. I’ll unpack these features for you in this article and showcase what’s under the hood through Python code.


Table of Contents

Getting to Know Scikit-Learn

A Brief History of Scikit-Learn

Scikit-Learn v0.22 Updates (with Python implementation)

Stacking Classifier and Regressor

Permutation-Based Feature Importance

Multi-class Support for ROC-AUC

kNN-Based Imputation

Tree Pruning

Getting to Know Scikit-Learn

This library is built upon the SciPy (Scientific Python) library, which you need to install before you can use scikit-learn. It is licensed under a permissive simplified BSD license and is distributed with many Linux distributions, encouraging academic and commercial use.

Overall, scikit-learn uses the following libraries behind the scenes:

NumPy: n-dimensional array package

SciPy: Scientific computing Library

Matplotlib:  Plotting Library

IPython: Interactive Python (for Jupyter Notebook support)

SymPy: Symbolic mathematics

Pandas: Data structures, analysis, and manipulation

Lately, scikit-learn has reorganized and restructured its functions & packages into six main modules:

Classification: Identifying which category an object belongs to

Regression: Predicting a continuous-valued attribute associated with an object

Clustering: For grouping unlabeled data

Dimensionality Reduction: Reducing the number of random variables to consider

Model Selection: Comparing, validating and choosing parameters and models

Preprocessing: Feature extraction and normalization

scikit-learn provides the functionality to perform all the steps from preprocessing, model building, selecting the right model, hyperparameter tuning, to frameworks for interpreting machine learning models.

Scikit-learn Modules (Source: Scikit-learn Homepage)

A Brief History of Scikit-learn

Scikit-learn has come a long way from when it started back in 2007 as scikits.learn. Here’s some cool trivia for you: scikit-learn began as a Google Summer of Code project by David Cournapeau!

This was taken over and rewritten by Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, and Vincent Michel, all from the French Institute for Research in Computer Science and Automation (INRIA), and its first public release took place in 2010.

Since then, it has added a lot of features and survived the test of time as the most popular open-source machine learning library across languages and frameworks. The below infographic, prepared by our team, illustrates a brief timeline of all the scikit-learn features along with their version number:

The above infographic shows the release of features since its inception as a public library for implementing machine learning algorithms, from 2010 to 2019.

Today, scikit-learn is being used by organizations across the globe, including the likes of Spotify, JP Morgan, Evernote, and many more. You can find the complete list with testimonials here. I believe this is just the tip of the iceberg when it comes to this library’s popularity, as there will be a lot of small and big companies using scikit-learn at some stage of prototyping models.

The latest version of scikit-learn, v0.22, has more than 20 active contributors today. v0.22 has added some excellent features to its arsenal that provide resolutions for some major existing pain points along with some fresh features which were available in other libraries but often caused package conflicts.

We will cover them in detail here and also dive into how to implement them in Python.

Scikit-Learn v0.22 Updates

Along with bug fixes and performance improvements, here are some new features that are included in scikit-learn’s latest version.

Stacking Classifier & Regressor

Stacking is an ensemble learning technique that uses predictions from multiple models (for example, decision tree, KNN or SVM) to build a new model.

This model is used for making predictions on the test set. Below is a step-wise explanation, taken from this excellent article on ensemble learning, for a simple stacked ensemble:

The base model (in this case, a decision tree) is fitted on the whole train dataset.

This model is then used to make final predictions on the test set.

The mlxtend library provides an API to implement stacking in Python. Now, sklearn, with its familiar API, can do the same, and it’s pretty intuitive, as you will see in the demo below. You can import either StackingRegressor or StackingClassifier depending on your use case:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Base estimators whose predictions feed the final estimator
estimators = [
    ('rf', RandomForestClassifier(n_estimators=10, random_state=42)),
    ('dt', DecisionTreeClassifier(random_state=42))
]

# Logistic regression stacks the base models' predictions
clf = StackingClassifier(estimators=estimators, final_estimator=LogisticRegression())

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
clf.fit(X_train, y_train).score(X_test, y_test)
```

Permutation-Based Feature Importance

As the name suggests, this technique provides a way to assign importance to each feature by permuting each feature and capturing the drop in performance.

But what does permuting mean here? Let us understand this using an example.

Let’s say we are trying to predict house prices and have only 2 features to work with:

LotArea (square feet area of the lot)

YrSold (year when it was sold)

The test data has just 10 rows.

Next, we fit a simple decision tree model and get an R-Squared value of 0.78. We pick a feature, say LotArea, and shuffle it keeping all the other columns as they were:

Next, we calculate the R-Squared once more and it comes out to be 0.74. We take the difference or ratio between the 2 (0.78/0.74 or 0.78-0.74), repeat the above steps, and take the average to represent the importance of the LotArea feature.

We can perform similar steps for all the other features to get the relative importance of each feature. Since we are using the test set here to evaluate the importance values, only the features that help the model generalize better will fare better.

Earlier, we had to implement this from scratch or import packages such as ELI5. Now, scikit-learn has an inbuilt facility to do permutation-based feature importance. Let’s get into the code to see how we can visualize this:
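As a minimal sketch of the steps described above (the LotArea/YrSold columns are replaced here by a synthetic regression dataset, an assumption for illustration only), scikit-learn's new `permutation_importance` utility shuffles each feature on the test set and records the resulting drop in R²:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script also runs headless
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the house-price example
X, y = make_regression(n_features=7, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times on the *test* set and record the score drop
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

plt.boxplot(result.importances.T)
plt.ylabel("Drop in R^2 when feature is shuffled")
plt.show()
```

Each box summarizes the score drops over the 10 shuffles of one feature; larger drops mean the model leans on that feature to generalize.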



As you can see in the above box plot, there are 3 features that are relatively more important than the other 4. You can try this with any model, which makes it a model agnostic interpretability technique. You can read more about this machine learning interpretability concept here.

Multiclass Support for ROC-AUC

The ROC-AUC score for binary classification is super useful, especially when it comes to imbalanced datasets. However, there was no support for multi-class classification until now, and we had to code it manually. To use the ROC-AUC score for multi-class/multi-label classification, we need to binarize the target first.

Currently, sklearn supports two strategies to achieve this: one-vs-one ('ovo') and one-vs-rest ('ovr'):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(random_state=44, max_depth=2)
rf.fit(X, y)
print(roc_auc_score(y, rf.predict_proba(X), multi_class='ovo'))
```

Also, there is a new plotting API that makes it super easy to plot and compare ROC-AUC curves from different machine learning models. Let’s see a quick demo:

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import plot_roc_curve
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
import matplotlib.pyplot as plt

X, y = make_classification(random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

svc = SVC(random_state=42)
svc.fit(X_train, y_train)
rfc = RandomForestClassifier(random_state=42)
rfc.fit(X_train, y_train)

# Plot both ROC curves on the same axes for comparison
svc_disp = plot_roc_curve(svc, X_test, y_test)
rfc_disp = plot_roc_curve(rfc, X_test, y_test, ax=svc_disp.ax_)
rfc_disp.figure_.suptitle("ROC curve comparison")
plt.show()
```

In the above figure, we have a comparison of two different machine learning models, namely Support Vector Classifier & Random Forest. Similarly, you can plot the AUC-ROC curve for more machine learning models and compare their performance.

kNN-Based Imputation

In the kNN-based imputation method, the missing values of an attribute are imputed using the values from the samples that are most similar to the one with the missing entry. The assumption behind using kNN for missing values is that a point’s value can be approximated by the values of the points closest to it, based on the other variables.

The k-nearest neighbor can predict both qualitative & quantitative attributes

Creation of predictive machine learning model for each attribute with missing data is not required

Correlation structure of the data is taken into consideration

Scikit-learn supports kNN-based imputation using the Euclidean distance method. Let’s see a quick demo:

```python
import numpy as np
from sklearn.impute import KNNImputer

X = [[4, 6, np.nan],
     [3, 4, 3],
     [np.nan, 6, 5],
     [8, 8, 9]]

# Each missing value is filled from the 2 nearest rows
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```

You can read about how kNN works in comprehensive detail here.

Tree Pruning

In basic terms, pruning is a technique we use to reduce the size of decision trees thereby avoiding overfitting. This also extends to other tree-based algorithms such as Random Forests and Gradient Boosting. These tree-based machine learning methods provide parameters such as min_samples_leaf and max_depth to prevent a tree from overfitting.

Pruning provides another option to control the size of a tree. XGBoost & LightGBM have pruning integrated into their implementation. However, a feature to manually prune trees has been long overdue in Scikit-learn (R already provides a similar facility as a part of the rpart package).

In its latest version, Scikit-learn provides this pruning functionality making it possible to control overfitting in most tree-based estimators once the trees are built. For details on how and why pruning is done, you can go through this excellent tutorial on tree-based methods by Sunil. Let’s look at a quick demo now:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(random_state=0)

# Without pruning (ccp_alpha=0)
rf = RandomForestClassifier(random_state=0, ccp_alpha=0).fit(X, y)
print("Average number of nodes without pruning {:.1f}".format(
    np.mean([e.tree_.node_count for e in rf.estimators_])))

# With cost-complexity pruning (ccp_alpha=0.1)
rf = RandomForestClassifier(random_state=0, ccp_alpha=0.1).fit(X, y)
print("Average number of nodes with pruning {:.1f}".format(
    np.mean([e.tree_.node_count for e in rf.estimators_])))
```

End Notes

The scikit-learn package is the ultimate go-to library for building machine learning models. It is the first machine learning-focused library all newcomers lean on to guide them through their initial learning process. And even as a veteran, I often find myself using it to quickly test out a hypothesis or solution I have in mind.

The latest release definitely has some significant upgrades as we just saw. It’s definitely worth exploring on your own and experimenting using the base I have provided in this article.

Related

Everything You Need To Know About Twitter Advanced Search

And while many still view Twitter as a mere social media platform for following friends and being followed, a significant number are exploiting its immense potential as a business marketing platform, specifically for lead generation.

In fact, 82% of B2B marketers preferred using Twitter in 2023, which makes it second only to LinkedIn for that audience.

Who Will Marketers Find With Twitter Search?

Contrary to earlier expectations – that Twitter growth would decline or flatten out – statistics show that Twitter growth is anything but stagnant.

Based on Twitter’s most recent quarterly filing, usage data shows that the platform grew by 24% relative to the preceding quarter, representing an additional 14 million users.

In the US, as of September 2023, 52% of Twitter users reported accessing (and probably using) the social media platform every single day.

And the Twitter Search function— specifically its Advanced Search feature— has huge business potential, through lead generation, for changing the marketing landscape.

What Exactly Is the Twitter Advanced Search Feature?

There are two Search features: the general Search feature (that you’ve most likely used) and the Advanced Search feature.

Let’s first explain the general Search feature.

On a laptop, the general Search feature sits on the right-hand side of the screen.

If you’re using a smartphone, however, the general Search is represented by the magnifying icon at the bottom of your mobile phone screen.

You can’t glean much from these general searches because your search range is limited.

That’s where the Advanced Search feature comes in.

How to Access Twitter’s Advanced Search Feature

You can access the Twitter Advanced Search by first using the general Search feature referenced above.

On your laptop, key in your search phrase. Let’s choose [Tesla], the company associated with Elon Musk.

After typing [Tesla] in the general search field, you’ll spot three dots on the right-hand side of the screen.

If you point your cursor at the three dots, the word “more” will pop up.

By now, you’re probably saying to yourself, “That’s great for desktop users. But what about mobile?”

Advanced Search Fields

You’re going to get a pretty long list of search fields to choose from, which can be a little overwhelming. For example, under the heading “Words,” this list appears:

All of these words.

This exact phrase.

Any of these words.

None of these words.

These hashtags.

Written in (language).

Other headings include:

People.

Places.

Dates.

Other.

What’s cool about these fields is that you aren’t limited to choosing just one. So, for example, you could perform a search that combined an exact phrase, from specific accounts, during a specific date range.
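The combined fields map onto Twitter's standard search operators (quoted phrases, `-` for exclusions, `#hashtags`, `from:`, `since:`, `until:`). As a sketch, this hypothetical helper (illustrative only, not part of any Twitter API) composes such a query string:

```python
def build_twitter_query(exact=None, none_of=None, hashtags=None,
                        from_user=None, since=None, until=None):
    """Compose a Twitter search string from Advanced Search-style fields."""
    parts = []
    if exact:
        parts.append(f'"{exact}"')            # This exact phrase
    if none_of:
        parts += [f"-{w}" for w in none_of]   # None of these words
    if hashtags:
        parts += [f"#{h}" for h in hashtags]  # These hashtags
    if from_user:
        parts.append(f"from:{from_user}")     # From this account
    if since:
        parts.append(f"since:{since}")        # Start of date range
    if until:
        parts.append(f"until:{until}")        # End of date range
    return " ".join(parts)

print(build_twitter_query(exact="USC Heisman", none_of=["football"], since="2020-01-01"))
# → "USC Heisman" -football since:2020-01-01
```

The same string can be typed directly into the general search box; the Advanced Search form is essentially a friendlier front end for these operators.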

Before moving on to what happens after you hit Search, let’s explore some of the common search queries and how you can use them to refine your searches.

How to Use the “Words” Field in Advanced Search

There are six fields under “Words,” perhaps the most commonly used search category in Twitter’s Advanced Search.

Let’s take a look at each so you understand what they mean.

All of These Words

Use this field to find tweets that include all of the words you type, in any order. To keep a multi-word phrase together, wrap it in quotation marks.

If you’re searching for tweets and you aren’t sure of the exact phrase, you can query some relevant and specific words — whatever you think the original tweet or tweets must have contained.

This Exact Phrase

Use this field to enter a very specific phrase.

So if you were searching for tweets about Heisman Trophy winners that attended USC, for example, you would enter [USC Heisman Trophy Winners] instead of just [Heisman Trophy Winners].

One cool thing is that quotes are automatically added in this field.

Any of These Words

Use this field to search for a group of related words.

The search automatically adds the word “OR” between each word or phrase you type, returning tweets that match any of them.
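In raw query form, this is Twitter’s OR operator. Typing the field’s contents directly into the search bar looks something like this (an illustrative query):

```
sneakers OR trainers OR "running shoes"
```

This returns tweets containing at least one of the three terms.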

None of These Words

If you want to search Twitter but omit certain words or phrases, this is your search option. Using this field will eliminate any tweets that contain words or phrases you don’t want to be included in a search result.

For example, you may want results for [USC] but exclude any mention of [USC football].

This is an effective way for businesses to exclude search results for the names of other companies in the same industry.

These Hashtags

If you want to search for a hashtag, whether it’s a hashtag of your brand or the hashtag of a competitor, this is the relevant search option. For example, if you’re a sneaker company and you want to check out what people are tweeting about regarding a big brand, you could type [#Nike] to get results.

Any Language

Use this field if you want to narrow your results to tweets written in one of the more than 50 languages listed.

This may help if you are analyzing the foreign market in your industry or have a brand that operates in multiple countries.
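Behind the scenes, this field maps to the lang: operator, which takes a two-letter ISO language code. For example, to find Spanish-language tweets about Tesla:

```
Tesla lang:es
```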

How to Use The “Account” Field When Doing Advanced Search

Under “Accounts” (formerly named “People”), there are three different fields that can alter your search results depending on what you type.

Just type the username of a person with a Twitter account, or several usernames separated by a comma.

Then choose whether you want results exclusively sent by that account, to that account, or that mentioned that account.

Just remember to use proper Twitter handles, which begin with @.
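These three fields correspond to Twitter’s from:, to:, and plain @ mention operators. Sticking with the [Tesla] example, the raw queries would look like this:

```
from:Tesla      tweets sent by @Tesla
to:Tesla        replies sent to @Tesla
@Tesla          tweets mentioning @Tesla
```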

How to Use The “Engagement” Field When Doing Advanced Twitter Search

The depth of engagement is measured by the number of retweets, replies, and likes a tweet receives. A tweet that generates thousands of retweets is doubtless an engaging tweet.

To filter by engagement, specify a minimum number of retweets, likes, or replies in the search fields, or any combination of the three.

By doing this, you weed out low-engagement posts that add noise rather than value to your search.
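In operator form, these minimums are min_retweets:, min_faves: (likes), and min_replies:. For example, to surface only highly engaged Tesla tweets:

```
Tesla min_retweets:1000 min_faves:500 min_replies:100
```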

How to Use The “Dates” Field When Doing Advanced Twitter Search

If you remember a tweet but not the exact date it was tweeted, the “Dates” option has you covered, provided you can recall the rough period within which it was sent.

Under the heading ‘Dates,’ you can select a specific date range and only receive search results from that time.

You’ll simply enter the date ranges, and the relevant tweet will be displayed, regardless of how long ago the phrase was tweeted.

One thing to note is that the first tweet was sent on March 21, 2006, so the system will default to that date if you enter an earlier one.
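The date range maps to the since: and until: operators, both of which take dates in YYYY-MM-DD format:

```
Tesla since:2021-01-01 until:2021-12-31
```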

Don’t Forget the Search Operators

Search operators are filters you type directly into the search bar, and they will help you weed out the useful tweets from the pics of what users had for dinner.
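Every Advanced Search form ultimately compiles down to a single operator string passed to Twitter’s search URL, which means you can also build searches programmatically. Here is a minimal sketch in Python (the query itself is illustrative):

```python
from urllib.parse import quote

def twitter_search_url(query: str) -> str:
    """Build a Twitter search URL from a raw operator query."""
    # quote() percent-encodes spaces, quotes, and colons so the
    # query survives intact as a single URL parameter.
    return "https://twitter.com/search?q=" + quote(query)

# Combine operators exactly as the Advanced Search form would:
# an exact phrase, a sender, an engagement floor, and a date range.
query = '"model 3" from:Tesla min_retweets:100 since:2021-01-01'
print(twitter_search_url(query))
```

Opening the printed URL in a browser runs the same search you would have assembled by filling in the form.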

How to Understand Advanced Twitter Search Results

Twitter has an algorithm that will determine how you get the search results and in which order.

While what drives the nature and order of Twitter’s Advanced Search results isn’t known with certainty, several metrics may play a role.

These may include tweets that have elicited a certain level of reaction, relevance, time-lapse of the tweet, and location, among others.

After you have typed in all the information to generate a search, the results page will include the headings:

Top: This is a list of what Twitter considers the tweets that are triggering the most reaction.

People: These are the accounts of users based on your search criteria.

Videos: These are tweets with links to videos on other websites.

After reviewing the results, you may realize that you need to refine or broaden your search. You can do that by simply going back to the Advanced Search page.

Depending on the level of analysis needed, there are also paid programs that can email alerts based on hashtags, business name, etc. An example of this is Twilert, which is basically a Google Alerts for Twitter.

There are several ways to use this Twitter search feature to drive revenue through lead generation.

One way to do this is to assess what customers complain about regarding your competitors’ products. Then you can use this information to improve your product.

The Advanced Search feature can provide your business with valuable demographic information, information about prospects in your local area, and lead generation opportunities.

When your goal is lead generation, you will want to focus on these areas:


1. Competitor mentions

2. Product or service mentions

Type in the name of a product or service you offer under Words. If you sell computer security software, you would type that phrase followed by a “?”.
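The “?” here is Twitter’s question operator: appending it limits results to tweets phrased as questions, which is exactly what you want when hunting for purchase inquiries. The raw query would look like this:

```
"computer security software" ?
```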

You can use the results to create a database of people who have made inquiries about products or services you sell and reach out to them about what you can offer.

These searches are effective because they identify people who are unhappy with products or services they purchased, or who have expressed interest in those products and services but have not yet engaged with a seller.

But there are other ways.

Check out these 8 Terrific Tips to Optimize a Twitter Business or Brand Profile.

Another Tool in Your Marketing Tool Chest

Twitter Advanced Search helps you to analyze your market, judge how your competitors are doing based on positive or negative sentiment, and improve your geotargeting based on the number of tweets in specific locations.

It may take a few searches before you understand all the ways you can manipulate the results.

But once you do, you can home in on the information you need to help boost your Twitter marketing efforts.


How To Use Google Classroom: All You Need To Know

We could go on endlessly about how technology bridges gaps and enables access for everyone, especially in ways that would be impossible using age-old conventional methods, but we are well past the times when we could simply be appreciative. Not because there is nothing to appreciate; in fact, it’s quite the opposite these days. Ever since the COVID-19 pandemic, this technology has been less an option than a necessity, compelling even the most technologically amateurish of us to get on board just to function and stay relevant.

As intimidating as that sounds, Learning Management Systems (LMSs) like Google Classroom are here not just for the big, life-changing shifts but also for the small conveniences you will learn to appreciate as you get familiar with the software.

Related: How to get Bitmoji in Google Classroom

Who can use Google Classroom?

We are not exaggerating about the convenience of Google Classroom. Anyone who has a personal Google account or uses G Suite for Education or Nonprofits can access Google Classroom as a free service. In fact, the use of Google Classroom doubled during the pandemic, with countries across the world adopting this LMS to ensure that education was not put on hold.

Related: How To Make An Interactive Bitmoji Google Classroom Scene For FREE

Ways to use Google Classroom

Streamline Classroom Management

With Google Classroom, you’re not just getting the Classroom interface, but also an integration with other handy Google features like Docs, Drive, and Calendar. This makes all your classroom activities inter-linked in an efficient manner and basically creates short-cuts for activities that would otherwise have been tedious. For example, when you create an assignment in the Classwork section, you can attach a Google Drive file that students can also access using Drive and the due date you assign will be automatically added to your Google Calendar.

Related: How to make PDF editable in Google Classroom

Effective and reliable communication

Classroom’s Stream is similar to having your own personal Social Media platform from where you can communicate with students and update them about all activities. Whether it’s announcements or assignments, you can post about it on stream and trust Google to notify all the necessary parties. In fact, you can customize the list of people who see the post which makes classroom management easy as well. Classroom also enables you to check-in with students privately, answer their questions, and offer support.

Feedback assignments and assessments

With Google Classroom, you will spend less time grading and more time providing quality feedback on your student’s work. With resources like Google Forms for quizzes, Google automatically grades your students based on a marking system that you have set. Now, you will have resources at hand that will help you to focus on things that truly matter.

Organize, distribute and collect

The convenience of being able to organize, distribute, and collect classroom-related anything using Google Classroom is fairly unbeatable. In fact, even when schools resume again, this might be a method you will consider permanently adopting.  The convenience of posting an assignment to multiple classes or modifying and reusing assignments from year to year is one of the many blessings of an LMS like Google Classroom. If your students have regular access to devices, Google Classroom can help you avoid some trips to the photocopier and cut down on the amount of paper that goes into the process of teaching and learning.

How to get started with Google classroom

Pretty much like everything else Google, you must have a Google account to get started with Google Classroom. Your school likely has its own G Suite account, in which case you should use it instead of your personal Google account. Before doing anything else, you will need to know how to create a Google Classroom; follow our tutorial on creating one here.

Great! Now that your Google Classroom is ready and you have more or less figured out its various functionalities, let’s understand how to navigate and use them.

How to navigate Google Classroom

There is, of course, more to Google Classroom than the four major tabs (though they do form an extremely important part) and how to assign work. Let’s delve deeper into the features.

Class Settings – General

How to make changes to your Class code Settings

How to make changes to Stream Settings

A major amount of engagement happens in the Stream section between teachers and students. You can govern how you want these interactions to occur by choosing any of the options that suit you.

How to make changes to Classwork notifications

You can decide how you want to receive your Classwork notifications and even choose to altogether hide the notifications by selecting one of the options from the dropdown menu. Finally, you can also choose to check or uncheck the Show deleted items, depending on whether you want your students to see it.

Class Settings – Marking

Class Stream

The Stream is the most social and active space in Google Classroom. This is where you can post assignments and announcements. If you’ve enabled it to be so that Students can share resources or ask questions, even that is possible. Any assignments created on the Classwork tab will be automatically announced on the Stream.

How to use Class Stream

Classwork

How to use the create function in Classwork

The create function is absolutely crucial to preparing assignments in multiple formats. We have a comprehensive tutorial on how to use the Create function here.

Marks

The Marks section is where the bulk of your effort pays off. This is where you will be able to see students’ assignment submissions and mark them. Google will automatically update these marks and inform the students about their performance from here.

How to mark assignments

How to optimize features on Google Classroom

By now, if you’ve been with us on this tutorial so far, you have more or less learned how to use all the basic functions of Google Classroom. Give yourself a major pat on the back; we are very proud of your commitment. Now, let’s explore how to optimize some of these handy features.

Set Class Templates

The beauty of an LMS like Google Classroom is that you can templatize the classrooms you liked. So instead of recreating new classrooms from scratch, you can simply copy one and reuse it as a template like this:

Now, you can copy this template to create a new class whenever you want!

Control your notifications

Now, you will see one of the most comprehensive and thorough notification settings pages, which you can customize however you want simply by enabling or disabling whichever notifications you require.

Archive and delete classrooms

No one wants a cluttered Classroom page and this calls for being able to efficiently manage your Classrooms whether it’s about cleaning them out, archiving them, or deleting them. Here’s a super helpful tutorial on how you can do it.

Bring Bitmojis into the picture

A virtual interactive Bitmoji Classroom will take your teaching experience to the next level. It makes communication with students and parents fun, thus becoming a great engagement tool in your teaching arsenal. You can check out this article on how to create an interactive Bitmoji Google Classroom.

Bonus Tip

Request for features in Google Classroom

The more popular a feature request, the more likely it’ll be implemented. So make sure that you send feedback and send it often!

Well, we certainly hope that you found this tutorial helpful. Let us know if we can help you with anything. Take care and stay safe!

RELATED:

What To Know About Google’s New Free VPN

VPN by Google One

The first thing to know is that “free” is a bit of a misnomer: It’s free when you buy the Google One plan at the 2TB or higher tier. Which is another way of saying it’ll cost you $9.99 per month.

Still, the Google One plan in question includes a truly huge amount of online storage space across Drive, Gmail, and Google Photos, as well as a handful of perks that include 10% back in store credit for Google Store purchases and Pro Sessions, which offer one-on-one online sessions with Google experts about internet safety and VPN use (limited to the US, UK, and Canada).

The other thing to know is that this Google VPN isn’t, technically speaking, “new,” either: Google Fi cellular subscribers have already been able to use it, at least on their Android smartphones.

Google’s move to roll the VPN service out to those on its Google One plan looks like a sign that it has faith in its quality. It has presumably had time to work out the kinks with its Google Fi audience.

Features and Security

Okay, so it’s only kind of free and kind of new. But we can definitely confirm that this is 100% a VPN. It offers the basics of any VPN service:

An encrypted connection for streaming, downloading, and browsing

Prevents hacking attempts while on unsecured networks (malls, coffee shops)

Masks your IP address and location from third-party entities

Security is a top concern for many people, but particularly for anyone interested in a VPN, since that’s the whole point. Google knows this, and has stated it has a zero-logging policy. Google says it will “never use the VPN connection to track, log, or sell your browsing activity.” The company does concede that “Some minimum logging is performed to ensure quality of service, but your network traffic or IP associated with the VPN is never logged.”

Plus, its client libraries are open source, so the code will be available to all, and the company has scheduled an independent audit of its end-to-end systems for early next year.

Is Google’s VPN for you?

As many as 25% of all internet users accessed a VPN within the last month of 2023, and with the rise of remote work across 2023, that percentage has likely jumped even higher since.

Google’s reputation for quality and security is strong, even if its VPN has two big drawbacks: first, you’ll need Android to run it, at least for now; and second, it’s only available as a perk for those paying $9.99 a month for the Google One storage plan.

Not that there’s anything wrong with paying for a VPN. Not all VPNs are equally secure, and if you want to truly trust one, you’ll likely want to pay for that extra peace of mind.

If you’re in the market for the best of the best VPNs, take a look at our roundup of the top services available today with our quick guide, or just check out the comparison table below to see how their pricing and features compare to those of Google’s VPN offering.
