Cryptojacking The New Browser Mining Threat You Need To Know About

If you are struggling with a slow PC or internet connection, do not just blame your vendor or service provider yet, because you may be the victim of a new trick used by hackers called browser Cryptojacking.

The evolution of Cryptojacking is attributed to the soaring interest in cryptocurrencies over the past few months. Look at Bitcoin over that period: its value has gone up by more than 1,000%. This has attracted the attention of hackers as well, and has given birth to dangerous practices such as Cryptojacking.

What is Cryptojacking

Emergence

Before we understand what Cryptojacking is, first let us know about Cryptomining.

Cryptomining, or Cryptocurrency Mining, is the process by which new cryptocurrency comes into existence using blockchain technology. Cryptomining is also how new cryptocurrency coins get released onto the market. Mining is carried out by certain peers of the cryptocurrency network who compete (individually or in groups) to solve a difficult mathematical problem, called proof-of-work.
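To get an intuition for what proof-of-work means, here is a deliberately simplified Python sketch (real mining uses far harder targets and, for coins like Monero, different hash schemes): the miner keeps trying nonce values until the hash of the block data meets a difficulty target.

import hashlib

def toy_proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 digest of block_data + nonce starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# The work is finding the nonce; verifying it afterwards is cheap.
print(toy_proof_of_work("example block"))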

Coinhive, a service offering a JavaScript-based miner that website owners could embed to mine the Monero cryptocurrency with their visitors' CPUs, was meant to be legitimate. However, its concept led to the emergence of similar software, which is now used by cybercriminals for Cryptomining abuse, or Cryptojacking.

In short, Cryptojacking is the technique of hijacking browsers to mine cryptocurrency without user consent. Delivering cryptocurrency miners through malware is nothing new, but mining cryptocurrency simply when a webpage is accessed is, and attackers are abusing it for personal gain.

Cryptojacking is not a traditional malware

Cryptojacking does not harm your PC the way traditional malware or ransomware does. Neither does it store or lock down anything on the hard drive. Hence, it is not malware in itself, but it can certainly be introduced into your system by malware.

Like malware, however, Cryptojacking uses your PC's resources without your permission. It can make the PC and browsers extremely sluggish, drain the battery, and raise your electricity bill without you even realizing it.

Consequences of Cryptojacking

Cryptojacking can affect Windows OS as well as Mac OSX & Android. There have been numerous cases of Cryptojacking reported recently. Some of the common types include the following:

Websites using Coinhive deliberately

The Pirate Bay was one of the first major players caught using Coinhive deliberately. The issue was that it was done without transparency and without the visitors' consent. Once the crypto-mining script was discovered, The Pirate Bay issued a statement saying it was testing this solution as an alternative revenue source. Researchers fear that many other websites are already using Coinhive without visitors' consent.

Coinhive injected into compromised websites

Researchers identified compromised WordPress and Magento websites that had Coinhive or a similar JavaScript-based miner injected into them.

Read: What to do if Coinhive crypto-mining script infects your website.

Cryptojacking using browser extensions

In-browser cryptojacking uses JavaScript on a web page to mine cryptocurrency. JavaScript runs on just about every website you visit, so the JavaScript code responsible for in-browser mining does not need to be installed; as soon as you load the page, the in-browser mining code simply runs.

There have been cases of web browser extensions embedding Coinhive, where the cryptomining software ran in the background and mined Monero while the browser was running, and not only when visiting a specific website.

Cryptojacking with malware

This is another type of abuse, where Coinhive is deployed alongside malware through a fake Java update.

Cryptojacking in Android devices

An Android variant of Coinhive has been detected targeting Russian users. This trend suggests that Cryptojacking is expanding to mobile applications as well.

Typosquatted domains embedding Coinhive

Cryptojacking through cloud services

Cybercriminals are hijacking unsecured Cloud platforms and using them to mine cryptocurrency.

Microsoft has reported variations of Coinhive being spotted in the wild. Such a development indicates that Coinhive's success has motivated other parties to develop similar software and join this market.

Minr – A Coinhive alternative emerges

The use of Coinhive by legitimate users has generally been in decline owing to the unpopularity it has attracted since its launch. Coinhive is also easily traceable, which is another reason its prospective adopters are not using it on their websites.

So, as an alternative, the team behind Minr developed an "obfuscation" option, which makes it much more difficult to track the miner and so facilitates hidden use of the tool. This feature is so effective that it hides the code even from the popular anti-malware tool Malwarebytes.

How to stay protected from Cryptojacking

Cryptocurrencies & Blockchain technology is taking over the world. It is creating an impact on the global economy and causing technology disruptions as well. Everyone has started focusing on such a lucrative market – and this includes website hackers too. As returns increase, we should expect that such technologies will be misused.

Being observant while browsing is something you have to practice regularly if you want to stay away from Cryptojacking fraud. You are likely on a compromised website if you see a sudden spike in memory usage and sluggish performance on your PC. The best course of action is to stop the process by exiting the website, and not visit it again.

You can use Anti-WebMiner programs as one of the precautions.

Use a browser extension that blocks websites from using your CPU for crypto mining. If you use the Chrome browser, install the minerBlock extension. It is a useful extension for blocking web-based cryptocurrency miners all over the web; apart from Coinhive, it even blocks Minr.

Another necessary precaution is to update your Hosts file to block Coinhive's domains and other domains that are known to enable unauthorized mining. Remember, Cryptojacking is still growing as more and more people are drawn toward cryptocurrencies, so your block lists will have to be updated regularly.
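For example, a Hosts-file block list looks like the sketch below; the domain names shown are placeholders, so substitute a maintained list of known mining domains:

# Windows: C:\Windows\System32\drivers\etc\hosts   Linux/macOS: /etc/hosts
# Each line maps a known miner domain to a non-routable address.
0.0.0.0 miner-domain-1.example
0.0.0.0 miner-domain-2.example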

Prevent CoinHive from infecting your website

Don’t use any nulled templates or plugins on your website/forum.

Keep your CMS updated to the latest version.

Update your hosting software regularly (PHP, Database, etc.).

Secure your website with web security providers like Sucuri, Cloudflare, Wordfence, etc.

Take basic precautions to secure your blog.

Stay alert, stay safe!


Bert Explained: What You Need To Know About Google’s New Algorithm

Google’s newest algorithmic update, BERT, helps Google understand natural language better, particularly in conversational search.

BERT will impact around 10% of queries. It will also impact organic rankings and featured snippets. So this is no small change!

But did you know that BERT is not just any algorithmic update, but also a research paper and machine learning natural language processing framework?

In fact, in the year preceding its implementation, BERT has caused a frenetic storm of activity in production search.

On November 20, I moderated a Search Engine Journal webinar presented by Dawn Anderson, Managing Director at Bertey.

Anderson explained what Google’s BERT really is and how it works, how it will impact search, and whether you can try to optimize your content for it.

Here’s a recap of the webinar presentation.

What Is BERT in Search?

BERT, which stands for Bidirectional Encoder Representations from Transformers, is actually many things.

It’s more popularly known as a Google search algorithm ingredient/tool/framework called Google BERT, which aims to help Search better understand the nuance and context of words in searches and better match those queries with helpful results.

BERT is also an open-source research project and academic paper. First published in October 2018 as BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, the paper was authored by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.

Additionally, BERT is a natural language processing (NLP) framework that Google produced and then open-sourced so that the whole natural language processing research field could actually get better at natural language understanding overall.

You’ll probably find that most mentions of BERT online are NOT about the Google BERT update.

There are lots of actual papers about BERT being carried out by other researchers that aren’t using what you would consider as the Google BERT algorithm update.

BERT has dramatically accelerated natural language understanding (NLU) more than anything else, and Google's move to open-source BERT has probably changed natural language processing forever.

The machine learning (ML) and NLP communities are very excited about BERT as it takes a huge amount of heavy lifting out of carrying out research in natural language. It has been pre-trained on a lot of words: the whole of the English Wikipedia, some 2,500 million words.

Vanilla BERT provides a pre-trained starting-point layer for neural networks across diverse machine learning and natural language tasks.

While BERT has been pre-trained on Wikipedia, it is fine-tuned on questions and answers datasets.

One of those question-and-answer data sets it can be fine-tuned on is called MS MARCO: A Human Generated MAchine Reading COmprehension Dataset built and open-sourced by Microsoft.

It contains real Bing questions and answers (anonymized queries from real Bing users) built into a dataset that ML and NLP researchers can fine-tune on, and they then compete with each other to build the best model.

Researchers also compete over Natural Language Understanding with SQuAD (Stanford Question Answering Dataset). BERT now even beats the human reasoning benchmark on SQuAD.

Lots of the major AI companies are also building BERT versions:

Microsoft extends BERT with MT-DNN (Multi-Task Deep Neural Network).

RoBERTa from Facebook.

SuperGLUE Benchmark was created because the original GLUE Benchmark became too easy.

What Challenges Does BERT Help to Solve?

There are things that we humans understand easily but that machines, including search engines, don’t really understand at all.

The Problem with Words

The problem with words is that they’re everywhere, and more and more content is out there.

Words are problematic because plenty of them are ambiguous, polysemous, and synonymous.

BERT is designed to help solve ambiguous sentences and phrases that are made up of lots and lots of words with multiple meanings.

Ambiguity & Polysemy

Almost every other word in the English language has multiple meanings. In spoken word, it is even worse because of homophones and prosody.

For instance, “four candles” and “fork handles” for those with an English accent. Another example: comedians’ jokes are mostly based on wordplay, because words are very easy to misinterpret.

It’s not very challenging for us humans because we have common sense and context, so we can understand all the other words surrounding the situation or the conversation, but search engines and machines can’t.

This does not bode well for conversational search into the future.

Word’s Context

“The meaning of a word is its use in a language.” – Ludwig Wittgenstein, Philosopher, 1953

Basically, this means that a word has no meaning unless it’s used in a particular context.

The meaning of a word changes literally as a sentence develops due to the multiple parts of speech a word could be in a given context.

Case in point: in just the short sentence “I like the way that looks like the other one,” the Stanford Part-of-Speech Tagger shows that the word “like” is tagged as two separate parts of speech (POS).

The word “like” may be used as different parts of speech including verb, noun, and adjective.

So literally, the word “like” has no meaning on its own, because it can mean whatever its surroundings make it mean. The context of “like” changes according to the meanings of the words that surround it.

The longer the sentence is, the harder it is to keep track of all the different parts of speech within the sentence.

On NLR & NLU

Natural Language Recognition Is NOT Understanding

Natural language understanding requires an understanding of context and common sense reasoning. This is VERY challenging for machines but largely straightforward for humans.

Natural Language Understanding Is Not Structured Data

Structured data helps to disambiguate but what about the hot mess in between?

Not Everyone or Thing Is Mapped to the Knowledge Graph

There will still be lots of gaps to fill. Here’s an example.

As you can see here, we have all these entities and the relationships between them. This is where NLU comes in as it is tasked to help search engines fill in the gaps between named entities.

How Can Search Engines Fill in the Gaps Between Named Entities?

Natural Language Disambiguation

“You shall know a word by the company it keeps.” – John Rupert Firth, Linguist, 1957

Words that live together are strongly connected:

Co-occurrence.

Co-occurrence provides context.

Co-occurrence changes a word’s meaning.

Words that share similar neighbors are also strongly connected.

Similarity and relatedness.

…and build vector space models for word embeddings.

The NLP models learn the weights of the similarity and relatedness distances. But even if we understand the entity (thing) itself, we still need to understand a word’s context.
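As a rough illustration of that idea (this is not Google's tooling, just a sketch with the open-source gensim library, assuming gensim 4.x parameter names and a toy corpus), words that keep similar company end up with nearby vectors:

from gensim.models import Word2Vec

# Tiny hand-made corpus; real embeddings are trained on billions of words
sentences = [
    ["he", "kicked", "the", "bucket"],
    ["the", "bucket", "was", "filled", "with", "water"],
    ["she", "filled", "the", "pail", "with", "water"],
]
model = Word2Vec(sentences, vector_size=25, window=2, min_count=1, epochs=100)

# Neighbors are noisy on a corpus this small, but the API is the same at scale
print(model.wv.most_similar("bucket", topn=3))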

On their own, single words have no semantic meaning so they need text cohesion. Cohesion is the grammatical and lexical linking within a text or sentence that holds a text together and gives it meaning.

Semantic context matters. Without surrounding words, the word “bucket” could mean anything in a sentence.

He kicked the bucket.

I have yet to cross that off my bucket list.

The bucket was filled with water.

An important part of this is part-of-speech (POS) tagging:
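As a quick illustration (using NLTK's off-the-shelf Penn Treebank tagger rather than the Stanford tagger mentioned above), a tagger assigns different parts of speech to the two occurrences of “like”:

import nltk

# One-time downloads of the tokenizer and tagger models
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("I like the way that looks like the other one.")
print(nltk.pos_tag(tokens))
# The first "like" typically comes back as a verb (VBP), the second as a preposition (IN)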

How BERT Works

Past language models (such as Word2Vec and GloVe) built context-free word embeddings. BERT, on the other hand, provides “context”.

To better understand how BERT works, let’s look at what the acronym stands for.

B: Bi-directional

Previously, all language models (e.g., Skip-gram and Continuous Bag of Words) were uni-directional, so they could only move the context window in one direction: a moving window of “n” words (either left or right of a target word) to understand a word’s context.

Most language modelers are uni-directional. They can traverse over the word’s context window from only left to right or right to left. Only in one direction, but not both at the same time.

BERT is different. BERT uses bi-directional language modeling (which is a FIRST).

BERT can see the WHOLE sentence on either side of a word (contextual language modeling) and all of the words almost at once.

ER: Encoder Representations

What gets encoded is decoded. It’s an in-and-out mechanism.

T: Transformers

BERT uses “transformers” and “masked language modeling”.

One of the big issues with natural language understanding in the past has been not being able to work out what a word is referring to in a given context.

Pronouns, for instance. It’s very easy to lose track of who somebody is talking about in a conversation. Even humans can struggle to keep track of who is being referred to all the time.

It’s similar for search engines: they struggle to keep track of whom you mean when you say he, they, she, we, it, and so on.

So the attention part of transformers actually focuses on the pronouns and all the word meanings that go together, to try to tie back who is being spoken to or what is being spoken about in any given context.

Masked language modeling stops the target word from seeing itself: the mask prevents the word under focus from simply looking up its own identity during training.

When the mask is in place, BERT just guesses at what the missing word is. It’s part of the fine-tuning process as well.
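You can see masked language modeling in action with the open-source Hugging Face transformers library and a public BERT checkpoint (this is not Google Search's production system, just the released research model):

from transformers import pipeline

# bert-base-uncased is the publicly released pre-trained BERT model
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT fills in the [MASK] token using context from BOTH sides of it
for prediction in unmasker("The man went to the [MASK] to buy milk."):
    print(prediction["token_str"], round(prediction["score"], 3))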

What Types of Natural Language Tasks Does BERT Help With?

BERT will help with things like:

Named entity determination.

Textual entailment (next-sentence prediction).

Coreference resolution.

Question answering.

Word sense disambiguation.

Automatic summarization.

Polysemy resolution.

How BERT Will Impact Search

BERT Will Help Google to Better Understand Human Language

BERT’s understanding of the nuances of human language is going to make a massive difference to how Google interprets queries, because people are obviously searching with longer, more question-like queries.

BERT Will Help Scale Conversational Search

BERT will also have a huge impact on voice search (as an alternative to problem-plagued Pygmalion).

Expect Big Leaps for International SEO

BERT has this mono-linguistic to multi-linguistic ability because a lot of patterns in one language do translate into other languages.

There is a possibility of transferring a lot of the learnings to different languages, even though BERT doesn’t necessarily understand each language itself fully.

Google Will Better Understand ‘Contextual Nuance’ & Ambiguous Queries

A lot of people have been complaining that their rankings have been impacted.

But I think that that’s probably more because Google in some way got better at understanding the nuanced context of queries and the nuanced context of content.

So perhaps, Google will be better able to understand contextual nuance and ambiguous queries.

Should You (or Can You) Optimize Your Content for BERT?

Probably not.

Google BERT is a framework of better understanding. It doesn’t judge content per se. It just better understands what’s out there.

For instance, Google BERT might suddenly understand more, and maybe there are over-optimized pages out there that suddenly get impacted by something else like Panda, because Google’s BERT suddenly realized that a particular page wasn’t that relevant for something.

That’s not to say you should optimize for BERT; you’re probably better off just writing naturally in the first place.

[Video Recap] BERT Explained: What You Need to Know About Google’s New Algorithm

Watch the video recap of the webinar presentation.

Or check out the SlideShare below.

Image Credits

All screenshots taken by author, November 2023


Everything You Need To Know About Scikit-Learn

Introduction

Scikit-learn is one Python library we all inevitably turn to when we’re building machine learning models. I’ve built countless models using this wonderful library and I’m sure all of you must have as well.

There’s no question – scikit-learn provides handy tools with easy-to-read syntax. Among the pantheon of popular Python libraries, scikit-learn ranks in the top echelon along with Pandas and NumPy. These three Python libraries provide a complete solution to various steps of the machine learning pipeline.

I love the clean, uniform code and functions that scikit-learn provides. It makes it really easy to use other techniques once we have mastered one. The excellent documentation is the icing on the cake as it makes a lot of beginners self-sufficient with building machine learning models.

The developers behind scikit-learn have come up with a new version (v0.22) that packs in some major updates. I’ll unpack these features for you in this article and showcase what’s under the hood through Python code.


Table of Contents

Getting to Know Scikit-Learn

A Brief History of Scikit-Learn

Scikit-Learn v0.22 Updates (with Python implementation)

Stacking Classifier and Regressor

Permutation-Based Feature Importance

Multi-class Support for ROC-AUC

kNN-Based Imputation

Tree Pruning

Getting to Know Scikit-Learn

This library is built upon SciPy (Scientific Python), which you need to install before you can use scikit-learn. It is licensed under a permissive simplified BSD license and is distributed with many Linux distributions, encouraging academic and commercial use.

Overall, scikit-learn uses the following libraries behind the scenes:

NumPy: n-dimensional array package

SciPy: Scientific computing Library

Matplotlib:  Plotting Library

iPython: Interactive python (for Jupyter Notebook support)

SymPy: Symbolic mathematics

Pandas: Data structures, analysis, and manipulation

Lately, scikit-learn has reorganized and restructured its functions & packages into six main modules:

Classification: Identifying which category an object belongs to

Regression: Predicting a continuous-valued attribute associated with an object

Clustering: For grouping unlabeled data

Dimensionality Reduction: Reducing the number of random variables to consider

Model Selection: Comparing, validating and choosing parameters and models

Preprocessing: Feature extraction and normalization

scikit-learn provides the functionality to perform all the steps from preprocessing, model building, selecting the right model, hyperparameter tuning, to frameworks for interpreting machine learning models.

Scikit-learn Modules (Source: Scikit-learn Homepage)

A Brief History of Scikit-learn

Scikit-learn has come a long way from when it started back in 2007 as scikits.learn. Here’s a cool piece of trivia for you – scikit-learn began as a Google Summer of Code project by David Cournapeau!

The project was taken over and rewritten by Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort and Vincent Michel, all from the French Institute for Research in Computer Science and Automation (INRIA), and its first public release took place in 2010.

Since then, it has added a lot of features and survived the test of time as the most popular open-source machine learning library across languages and frameworks. The below infographic, prepared by our team, illustrates a brief timeline of all the scikit-learn features along with their version number:

The above infographic shows the release of features from 2010 onward, since scikit-learn's inception as a public library for implementing machine learning algorithms.

Today, scikit-learn is being used by organizations across the globe, including the likes of Spotify, JP Morgan, Evernote, and many more. You can find the complete list, with testimonials, here. I believe this is just the tip of the iceberg when it comes to this library's popularity, as there will be a lot of small and big companies using scikit-learn at some stage of prototyping models.

The latest version of scikit-learn, v0.22, has more than 20 active contributors today. v0.22 has added some excellent features to its arsenal that provide resolutions for some major existing pain points along with some fresh features which were available in other libraries but often caused package conflicts.

We will cover them in detail here and also dive into how to implement them in Python.

Scikit-Learn v0.22 Updates

Along with bug fixes and performance improvements, here are some new features that are included in scikit-learn’s latest version.

Stacking Classifier & Regressor

Stacking is an ensemble learning technique that uses predictions from multiple models (for example, decision tree, KNN or SVM) to build a new model.

This model is used for making predictions on the test set. Below is a step-wise explanation I’ve taken from this excellent article on ensemble learning for a simple stacked ensemble:

The base model (in this case, decision tree) is then fitted on the whole train dataset

This model is used to make final predictions on the test prediction set

The mlxtend library provides an API to implement stacking in Python. Now sklearn, with its familiar API, can do the same, and it's pretty intuitive, as you will see in the demo below. You can import either StackingRegressor or StackingClassifier, depending on your use case:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Base estimators whose predictions feed the final estimator
estimators = [
    ('rf', RandomForestClassifier(n_estimators=10, random_state=42)),
    ('dt', DecisionTreeClassifier(random_state=42))
]

clf = StackingClassifier(estimators=estimators, final_estimator=LogisticRegression())

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
print(clf.fit(X_train, y_train).score(X_test, y_test))

Permutation-Based Feature Importance

As the name suggests, this technique provides a way to assign importance to each feature by permuting each feature and capturing the drop in performance.

But what does permuting mean here? Let us understand this using an example.

Let’s say we are trying to predict house prices and have only 2 features to work with:

LotArea – (Sq Feet area of the house)

YrSold (Year when it was sold)

The test data has just 10 rows as shown below:

Next, we fit a simple decision tree model and get an R-Squared value of 0.78. We pick a feature, say LotArea, and shuffle it keeping all the other columns as they were:

Next, we calculate the R-Squared once more and it comes out to be 0.74. We take the difference or ratio between the 2 (0.78/0.74 or 0.78-0.74), repeat the above steps, and take the average to represent the importance of the LotArea feature.

We can perform similar steps for all the other features to get the relative importance of each feature. Since we are using the test set here to evaluate the importance values, only the features that help the model generalize better will fare better.

Earlier, we had to implement this from scratch or import packages such as ELI5. Now, Sklearn has an inbuilt facility to do permutation-based feature importance. Let’s get into the code to see how we can visualize this:
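Here is a minimal sketch of the new sklearn.inspection.permutation_importance API, using a synthetic regression dataset (with 7 features, 3 of them informative) in place of the housing data described above:

import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the house-price example: 7 features, 3 informative
X, y = make_regression(n_features=7, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times on the TEST set and record the drop in score
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

plt.boxplot(result.importances.T, labels=[f"f{i}" for i in range(X.shape[1])])
plt.ylabel("Drop in score when the feature is permuted")
plt.show()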



As you can see in the above box plot, there are 3 features that are relatively more important than the other 4. You can try this with any model, which makes it a model agnostic interpretability technique. You can read more about this machine learning interpretability concept here.

Multiclass Support for ROC-AUC

The ROC-AUC score for binary classification is super useful, especially when it comes to imbalanced datasets. However, there was no support for multi-class classification until now, and we had to code it manually. In order to use the ROC-AUC score for multi-class/multi-label classification, we need to binarize the target first.

Currently, sklearn supports two strategies to achieve this, one-vs-one ('ovo') and one-vs-rest ('ovr'), selected via the multi_class parameter:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(random_state=44, max_depth=2)
rf.fit(X, y)
print(roc_auc_score(y, rf.predict_proba(X), multi_class='ovo'))

Also, there is a new plotting API that makes it super easy to plot and compare ROC-AUC curves from different machine learning models. Let’s see a quick demo:

from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import plot_roc_curve
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
import matplotlib.pyplot as plt

X, y = make_classification(random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

svc = SVC(random_state=42)
svc.fit(X_train, y_train)

rfc = RandomForestClassifier(random_state=42)
rfc.fit(X_train, y_train)

# Draw both ROC curves on the same axes for comparison
svc_disp = plot_roc_curve(svc, X_test, y_test)
rfc_disp = plot_roc_curve(rfc, X_test, y_test, ax=svc_disp.ax_)
rfc_disp.figure_.suptitle("ROC curve comparison")
plt.show()

In the above figure, we have a comparison of two different machine learning models, namely Support Vector Classifier & Random Forest. Similarly, you can plot the AUC-ROC curve for more machine learning models and compare their performance.

kNN-Based Imputation

In the kNN-based imputation method, the missing values of an attribute are imputed using values from the samples that are most similar to the one with the missing value. The assumption behind using kNN for missing values is that a point's value can be approximated by the values of the points closest to it, based on the other variables.

The k-nearest neighbor can predict both qualitative & quantitative attributes

Creation of a predictive machine learning model for each attribute with missing data is not required

Correlation structure of the data is taken into consideration

Scikit-learn supports kNN-based imputation using the Euclidean distance method. Let’s see a quick demo:

import numpy as np
from sklearn.impute import KNNImputer

X = [[4, 6, np.nan], [3, 4, 3], [np.nan, 6, 5], [8, 8, 9]]

# Each missing value is filled in from its 2 nearest rows
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))

You can read about how kNN works in comprehensive detail here.

Tree Pruning

In basic terms, pruning is a technique we use to reduce the size of decision trees thereby avoiding overfitting. This also extends to other tree-based algorithms such as Random Forests and Gradient Boosting. These tree-based machine learning methods provide parameters such as min_samples_leaf and max_depth to prevent a tree from overfitting.

Pruning provides another option to control the size of a tree. XGBoost & LightGBM have pruning integrated into their implementation. However, a feature to manually prune trees has been long overdue in Scikit-learn (R already provides a similar facility as a part of the rpart package).

In its latest version, Scikit-learn provides this pruning functionality making it possible to control overfitting in most tree-based estimators once the trees are built. For details on how and why pruning is done, you can go through this excellent tutorial on tree-based methods by Sunil. Let’s look at a quick demo now:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(random_state=0)

# No pruning (ccp_alpha=0)
rf = RandomForestClassifier(random_state=0, ccp_alpha=0).fit(X, y)
print("Average number of nodes without pruning {:.1f}".format(
    np.mean([e.tree_.node_count for e in rf.estimators_])))

# Cost-complexity pruning with ccp_alpha=0.1
rf = RandomForestClassifier(random_state=0, ccp_alpha=0.1).fit(X, y)
print("Average number of nodes with pruning {:.1f}".format(
    np.mean([e.tree_.node_count for e in rf.estimators_])))

End Notes

The scikit-learn package is the ultimate go-to library for building machine learning models. It is the first machine learning-focused library all newcomers lean on to guide them through their initial learning process. And even as a veteran, I often find myself using it to quickly test out a hypothesis or solution I have in mind.

The latest release definitely has some significant upgrades as we just saw. It’s definitely worth exploring on your own and experimenting using the base I have provided in this article.


Everything You Need To Know About The Apple Watch Ultra


Bigger and tougher

The Apple Watch Ultra is bigger and more durable than the Apple Watch Series 8. Apple

Apple Watch Ultra is big. While the difference between the 45mm chassis of the Watch Series 8 and the 49mm Watch Ultra may not sound substantial, it should feel positively huge to standard Apple Watch users. Keep in mind: Apple expanded the case size by 1mm with the Watch Series 7, and that made a very noticeable difference. 

It’ll also have a much thicker chassis to incorporate new components, including a larger, louder speaker and a three-microphone array to improve voice clarity when making calls on the watch in less-than-ideal conditions. The Watch Ultra only comes in one hardware configuration, which includes cellular connectivity, so the expectation is that people will want to use the Watch Ultra to make calls at any time.

Presumably, the larger case also allowed Apple to give the Watch Ultra a bigger battery, which it estimates will last up to 36 hours on a single charge, or up to 60 hours with a low-power feature (available later in the fall).

The Apple Watch Ultra has a new “Action” button and a redesigned Digital Crown. Apple

The redesigned watch will also feature some design tweaks for the sake of durability, and usability in extreme conditions. The titanium case extends up to cover the edges of the sapphire crystal display to minimize cracked edges. The Watch Ultra is rated to operate on-wrist at temperatures as low as minus 4 degrees Fahrenheit, or as high as 131 F. It’s also IP6X and MIL-STD-810H certified—a military-grade durability rating used for many “rugged” tech products—indicating it’s prepared for some conditions, including rain, humidity, immersion in sand and dust, freezing, shock, and vibration, among others.

The buttons—yes, plural—are also getting an overhaul. The Digital Crown is larger and features grooved notches to make it easier to manipulate with a gloved hand. There’s also a second input: a large customizable “Action” button that will allow you to start tracking workouts and perform other functions quickly. For example, triathletes can switch from running to cycling to swimming by simply pressing the button.

Last, but not least, Apple has created three new, activity-specific Apple Watch Ultra bands—the stitch-free hook-clasped Alpine Loop Band, the wetsuit-ready rubber Ocean Band, and the ultralight stretch Trail Loop band.

Built for survival

The new compass app allows you to set waypoints to help you find your way back to your camp or car. Apple

The Apple Watch Ultra offers some specialized features, many of which seem designed with safety and survival for hikers and climbers in mind. It uses a more precise “dual-frequency” GPS tracking that allows the watch to maintain tracking when you’re surrounded by tall structures or mountains.

As part of watchOS 9, the Watch Ultra will feature a redesigned version of the compass app that allows you to set waypoints, like your home, your camp, or your car, and allow you to orient yourself in relation to those locations. It will also be able to use a feature called backtrack that can use GPS to create a path retracing your steps in real-time. If you find yourself fully lost or hurt, the larger speaker can now play an ultra-loud 86-decibel siren that sends a distinctive SOS alarm (audible up to 600 feet away).

During the day, the display is brighter, up to 2000 Nits, which should make it easier to see regardless of glare. It also features a night mode, which turns the whole interface red, making it easier to see without interfering with your own night-adjusted vision.

Diver’s delight

The Apple Watch Ultra also seems to be an especially useful tool for divers. It’s waterproof up to 100 meters (WR100) and has an EN13319 depth gauge certification for diving accessories. Using a new depth app, you’ll be able to see your depth, time underwater, and max depth. In conjunction with an upcoming app, Oceanic+, the Watch Ultra will reportedly work as an effective dive computer, letting you plan and share dive routes and providing safety stop guidance.

Plus the best of Apple Watch Series 8 and watchOS 9

In addition to all of its exclusive changes, the Apple Watch Ultra will feature all of the upgrades in the upcoming Apple Watch Series 8. Most notably, that means new motion sensors that can detect if you get in a car crash and automatically call for help. They include a gyroscope and a highly sensitive accelerometer. Even the Watch Ultra’s built-in barometer plays a role in detecting crashes by detecting pressure changes typically associated with airbag deployment. There is also a temperature sensor that improves menstrual cycle tracking and enables ovulation tracking through the Health app (information Apple stressed is encrypted on the watch and only accessible with a user’s passcode/Touch ID/Face ID).

What does all this mean?

Apple will sell three activity-focused bands for the Apple Watch Ultra: The Trail Loop, the Alpine Loop, and the Ocean Band. Apple

At a glance, the people who should get most excited are iPhone-using fans of multisports smartwatches from brands like Garmin and Suunto. Those brands already make watches with many of these features, but their flagship watches cost even more than the $799 Apple Watch Ultra and don’t offer the same level of connectivity and convenience as an Apple Watch and iPhone working in sync.

The question remains: Is the Apple Watch Ultra worth buying? We will hopefully get our hands on the Apple Watch Ultra in the coming weeks, so we’ll have a full review with our thoughts on whether or not it’s worth that higher price. In the meantime, the Apple Watch Ultra is available on Amazon for $799.

What You Need To Know About Unc0Ver And The Fugu14 Untether

If you haven’t heard about the Fugu14 untether and how the unc0ver jailbreak now supports it, then you’d be inviting the age-old question of whether you live under a rock or not. And now that the latest version of AltStore (v1.4.8) can bundle the Fugu14 untether with the latest version of the semi-untethered unc0ver jailbreak tool, lots of people with compatible devices are jumping onboard.

But does Fugu14 really transform the semi-untethered unc0ver jailbreak into a fully-untethered jailbreak for the limited subset of devices that Fugu14 supports in its current form? The answer to that question appears to be more complicated than it should be at face value, but that’s something we intend to clear up in this piece.

Introduction

Firstly, we should mention right off the bat that Fugu14 in and of itself is an untethered jailbreak, albeit an incomplete one. It was developed by security researcher Linus Henze and later released in its incomplete form as a proof of concept so that jailbreak developers such as those in the checkra1n Team, Odyssey Team, and unc0ver Team could examine and attempt to make use of its inner workings for their own jailbreaks.

In its current form, the Fugu14 untether supports arm64e devices running iOS or iPadOS versions 14.4-14.5.1, and arm64e is a fancy way of saying iPhone XS or newer. Linus Henze has openly stated that Fugu14 could be updated to support older arm64 devices (iPhone X and older), however this would necessitate additional work.

A few minutes after Linus Henze released the Fugu14 untether and proof of concept for developers, Pwn20wnd updated the unc0ver jailbreak to v7.0.0 with preliminary support for it, which meant that users could install the Fugu14 untether manually. Several days later, Pwn20wnd updated the unc0ver jailbreak to v7.0.1. AltStore v1.4.8 was released around the same time with support for bundling the Fugu14 untether and the unc0ver jailbreak together.

Full stop right there. This is about where the confusion began, and at iDownloadBlog, we intend to clear up much of the confusion that this release and some of the verbiage used has kicked up.

Does Fugu14 truly untether the unc0ver jailbreak?

The unc0ver jailbreak is a semi-untethered jailbreak that gets installed by way of a side-loadable app, and always has been. Fugu14 is an untethered jailbreak in and of itself that makes use of a powerful untethered exploit. The two can be combined as of unc0ver v7.0.1 and AltStore v1.4.8 to create something interesting, but alas, it’s still technically a semi-untethered jailbreak as of now.

In its current form, the Fugu14 untether is being cleverly used as a mechanism to keep the unc0ver jailbreak app signed indefinitely. This means that you won’t be reliant on your computer’s AltServer installation and Mail app add-on to keep the unc0ver jailbreak app signed, and consequently, you can operate your jailbreak computer-free after you’ve jailbroken at least once.

A similar experience can be had with online signing services, as they can far surpass AltStore’s 7-day limit. But even signing services don’t last indefinitely, while the Fugu14 untether does.

Where things get confusing for some people is here: immediately following a reboot on the latest version of unc0ver with the Fugu14 untether, users will find themselves in a jailed state and will need to launch the unc0ver jailbreak app and re-jailbreak with it to return to a jailbroken state. A truly untethered jailbreak wouldn’t require this additional step by the end user, as the device would still be in a jailbroken state even after rebooting; A.K.A. persistence.

Okay, so now what?

When we said that unc0ver added “preliminary” support for Fugu14 in the original unc0ver v7.0.0 headline, that’s exactly how we meant it. Fugu14 doesn’t magically transform the unc0ver jailbreak into a full-fledged untethered jailbreak; at least not yet. But that could change in the future…

When Pwn20wnd added preliminary support for Fugu14 into the unc0ver jailbreak, it was more or less to offer enhanced functionality over the previous version in a shorter release window, which it did thanks to the indefinite app resigning. Pwn20wnd, along with other jailbreak developers in the checkra1n and Odyssey Teams have the opportunity to integrate the untether directly into their jailbreaks, which would make them fully untethered jailbreaks.

What we’re seeing right now is that the Fugu14 untether only supports a small subset of devices in its current form — that is, arm64e devices running iOS or iPadOS 14.4-14.5.1. Jailbreak developers likely want to expand support to more devices and more firmware versions, such as the iPhone X and older and all previous iOS & iPadOS 14 releases, and this will take a lot more time as it requires modifying the untether and performing lots of testing.

Once someone from one of the jailbreak teams gets the Fugu14 untether operating on more than just a small subset of devices, they’re likely to take the additional steps necessary to integrate the untether into their jailbreak tool. Why? Because it makes more sense to tackle it that way from a user experience perspective. Pre-releasing only partially-supported software convolutes everything and makes things less approachable by end users — especially new ones.

Should you install the untether?

While the Fugu14 untether isn’t yet offering a fully untethered jailbreak by way of unc0ver, it still enhances unc0ver’s capabilities by indefinitely re-signing the unc0ver jailbreak app, so it will do nothing but offer an additional lump of convenience for you.

Having said that, balance the risks with the benefits and make an informed choice.

Conclusion

So long story short, Fugu14 is a truly untethered jailbreak for arm64e devices running iOS or iPadOS 14.4-14.5.1, but unc0ver is still a semi-untethered jailbreak. Combining the two via AltStore, as Pwn20wnd has done, merely enhances the unc0ver jailbreak’s semi-untetheredness into something that keeps you away from the computer and AltServer longer, which is still an impressive feat to have pulled off.

It’s likely that existing jailbreaks could become fully untethered (staying jailbroken after a reboot) with the Fugu14 untether in the future. It’s also likely that those jailbreaks could support the untether on more devices than just the arm64e variety. Currently, the untether is still very new, so it will require additional work for that to happen, which also translates to more time. Patience is key.

All You Need To Know About Google’s Project Soli

On Saturday, a couple of interesting Google Pixel 4 images started circulating on Twitter, showing an oval-shaped cutout on the right of the detached bezels for the Pixel 4 and Pixel 4XL.

A lot of theories have come to the fore since then, but the majority of them hint at Project Soli integration. We are not sure whether the Pixel 4 devices will feature Project Soli or not, but the technology itself is fascinating, to say the least.

So, without further ado, let’s get to it.

What is Project Soli?

Google has assigned some of its best engineers to create a piece of hardware which would allow the human hand to act as a universal input device. In smartphones and wearables, we currently use the touch panel to interact with the system. Soli aims to remove the middleman (touch panel) by letting you interact with your device using simple hand gestures, without making contact.

How does it work?

The Soli sensor emits electromagnetic waves in a broad beam. The objects in the beam’s path scatter this energy, feeding some of it back to the radar antenna. The radar processes some properties of the returned signal, such as energy, time delay, and frequency shift. These properties, in turn, allow Soli to identify the object’s size, shape, orientation, material, distance, and velocity.

Soli’s spatial resolution has been fine-tuned to pick up most subtle finger gestures, meaning that it doesn’t need large bandwidth and high spatial resolution to work. The radar tracks subtle variation in the received signal over time to decode finger movements and distorting hand shapes.

How to use Soli?

Soli uses Virtual Tools to understand hand/finger gestures and carry out the tasks associated with them. According to Google, Virtual Gestures are hand gestures that mimic familiar interactions with physical tools.

Imagine holding a key between your thumb and index finger. Now, rotate the key as if you were opening a lock. That’s it. Soli, in theory, will pick up the gesture and perform the task associated with it.

So far, Soli recognizes three primary Virtual Gestures.

Button

Imagine an invisible button between your thumb and index finger; press the button by tapping the fingers together. Its primary use is expected to be selecting an application or performing in-app actions.

Dial

Imagine a dial that you turn up or down by rubbing your thumb against the index finger. Primary use is expected to be volume control.

Slider

Finally, think about using a Virtual Slider. Brush your thumb across your index finger to use it. Its primary use is expected to be controlling horizontal sliders, such as brightness control.

Feedback comes from the natural haptic sensation of your fingers touching one another.

What are the applications?

As no mainstream device has implemented Soli so far, it’s hard to guess how it’d perform in the wild. But if all goes according to plan, the radar could become fundamental to smartphones, wearables, IoT components, and even cars in the future.

The radar is super compact — 8mm x 10mm — doesn’t use much energy, has no moving parts, and packs virtually endless potential. Taking all of that into consideration, a contact-less smartphone experience doesn’t seem that far-fetched.
