# Top Versions Of DirectX With Explanation


Introduction of DirectX Versions

DirectX versions are updated collections of features in DirectX; each version adds new techniques to work with, and you can pick the version that matches the environment you need. DirectX is a collection of application programming interfaces (APIs) designed especially for game programming and multimedia handling. In this article, we look at the versions of DirectX and the important features of each one. So let us go through the versions of DirectX and analyze the strong points of each.

Top Versions of DirectX


1. DirectX 9

DirectX 9 was released in 2002 and was compatible with Windows 98 and Windows XP; it continued to receive updates through 2007. It introduced Shader Model 2.0, which includes Pixel Shader 2.0 and Vertex Shader 2.0.

2. DirectX 10

It came with major updates to the DirectX APIs and is compatible with Windows Vista and its later versions; features exclusive to DirectX 10 cannot run on older versions of Windows such as Windows XP. Many components changed in this version: DirectInput was deprecated in favor of XInput, DirectSound in favor of the Cross-platform Audio Creation Tool (XACT), with additional support for hardware-accelerated audio, and DirectPlay's dplay.dll was replaced with dplayx.dll.

3. DirectX 11

DirectX 11 came with major changes and very good features and was introduced at the Gamefest 08 event in Seattle. It added support for GPGPU computing, and Direct3D 11 comes with tessellation support. It also improved multi-threading support, which helps video games make better use of multi-core processors, a capability that attracted video game developers. You can run this version on several Windows operating systems: Windows Vista, Windows 7, Windows 8, and Windows 10. Hardware tessellation and Shader Model 5.0 require hardware that supports Direct3D 11. Microsoft also released a Technical Preview for Direct3D 11. The other hardware and API features of version 10.1 were retained and are only extended where required to increase the functionality of this version. After Direct3D 11 became available for Windows Vista through the 2009 platform update, it received four further updates. Let us discuss what those updates were.

DirectX 11.2 came with Windows 8.1 and Windows Server 2012 R2 and added new features for Direct2D, such as geometry realizations. It also introduced swap chain composition, which lets you render a lower-resolution scene and then composite it at a higher resolution through a hardware overlay. The third update was DirectX 11.X, introduced as the next step after DirectX 11.2 and running on the Xbox One; it included the draw bundles feature that was later carried into DirectX 12. The next update, DirectX 11.3, was announced together with DirectX 12 at GDC and released in 2015.

4. DirectX 12

The next version of DirectX is DirectX 12, introduced by Microsoft on March 20, 2014, at GDC and officially released with Windows 10 on July 29, 2015. The first feature worth highlighting is the set of low-level programming APIs for Direct3D 12, which reduce driver overhead. Resource utilization becomes more efficient through parallel computation, and developers can implement their own command lists and buffers to issue commands to the GPU.

Microsoft developer Max McMullen stated that Direct3D 12 is designed to achieve console-level efficiency on PCs, phones, and tablets. Its release followed several similar low-overhead initiatives: AMD's Mantle for AMD graphics cards, the Khronos Group's cross-platform Vulkan, and Apple's Metal for iOS and macOS.

In Intel's computer-generated asteroid-field demo shown at SIGGRAPH 2014, DirectX 12 delivered 50 to 70 percent improvements in rendering speed and CPU power consumption compared with DirectX 11. The first game publicly announced to use DirectX 12 was Ashes of the Singularity; Ars Technica's testing in 2015 revealed performance regressions for DirectX 12 relative to DirectX 11 on an Nvidia GeForce GTX 980 Ti, whereas an AMD Radeon R9 290X showed consistent improvements of up to 70 percent under DirectX 12.

5. DirectX 12 Ultimate

It is the latest revealed version of DirectX, introduced in March 2020. It is compatible with Windows 10 and the Xbox Series X as well as other ninth-generation Xbox consoles. The new features included in this version are Variable Rate Shading, which gives developers control over the level of shading detail depending on design choices, DirectX Raytracing 1.1, Sampler Feedback, and Mesh Shaders.

Conclusion

These were some important points to take you through the versions of DirectX so that you have an idea of each version's features and can compare them on the basis of their performance. You can now choose a version according to your hardware's capability.

Recommended Articles

This is a guide to DirectX Versions. Here we also discuss the introduction and top versions of directx along with a detailed explanation. you may also have a look at the following articles to learn more –


Learn Latest Versions Of Pytorch

Introduction to PyTorch Versions


Different Versions of PyTorch

Here we discuss the different versions of PyTorch that have been released, along with the required system configuration, and mainly focus on the stable release v1.3, as this is the version most used in industry and the research community at the moment:

1. Old Version – PyTorch Versions < 1.0.0

In the very first release of PyTorch, Facebook combined Python and the Torch library to create an open-source framework that can also run on CUDA and Nvidia GPUs. PyTorch mainly uses tensors (torch.Tensor) to store and operate on multi-dimensional arrays. PyTorch released its first public version as 0.1.12. Version 0.4 was one of the most significant releases, with core changes.

PyTorch v0.4 added support for Windows and added features to support exporting RNNs to ONNX (Open Neural Network Exchange). It provides C++/CUDA extensions for users, and the 0.4 release also supports writing device-agnostic code. Tensors and Variables were merged in the 0.4 release, and operations can return 0-dimensional tensors. To install any of the old versions through conda or miniconda, use the commands below:

In the command below, you can replace '0.2.0' with the desired version, such as '0.4.0' or '0.4.1', and replace cuda90 with cuda80, cuda75, and so on.

conda install pytorch=0.2.0 cuda90 -c pytorch

PyTorch is also available on GitHub, and users can check out an older version of PyTorch and build it from source; replace '0.2.0' with the desired version: git checkout v0.2.0. Users can also download the required libraries for macOS or Windows from the official PyTorch website.
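As a quick illustration of the device-agnostic code and 0-dimensional tensor support added in v0.4, here is a minimal sketch of mine (not from the original article; it assumes PyTorch 0.4 or newer):

import torch

# Pick the GPU if one is available, otherwise fall back to the CPU;
# the rest of the code stays the same either way.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3, device=device)
y = x.sum()          # a 0-dimensional tensor (a scalar)
print(y.dim())       # 0
print(y.item())      # plain Python number extracted from the tensor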

2. PyTorch Version 1.0 to 1.2

Before version 1.0, code written in PyTorch needed a Python VM environment to run. Version 1.0 provides Python functions and classes with torch.jit, through which functions and classes can be compiled into a high-level representation that is separate from the rest of the Python code. The main goal of the 1.0 to 1.2 releases was to combine the features of PyTorch, ONNX, and the Caffe2 framework into a single framework for seamless movement from research to production deployment. Some of the features added in version 1.0 are listed below:

Easy to integrate C++ function with Python.

It separates the AI model from code by providing two modes:

Eager Mode: Mostly used for research as it is simple, debuggable and can use any python library. It needs a Python environment to run.

Script Mode: The model can run without a Python interpreter. This is a production deployment mode: it has no Python dependency, and the code is an optimizable subset of Python (a minimal TorchScript sketch follows this list).

A model can run on servers, GPU or TPUs.
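Here is a minimal sketch of script mode using torch.jit; the scaled_relu function below is an illustrative example of mine, not something from the article, and it assumes PyTorch 1.0 or newer:

import torch

@torch.jit.script
def scaled_relu(x: torch.Tensor, scale: float) -> torch.Tensor:
    # Compiled to TorchScript, so it can be serialized and run without a Python interpreter.
    return torch.relu(x) * scale

scaled_relu.save("scaled_relu.pt")          # serialize the compiled function
loaded = torch.jit.load("scaled_relu.pt")   # reload it later (e.g., from C++ via libtorch)
print(loaded(torch.randn(4), 2.0))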

conda install pytorch==1.2.0 torchvision==0.4.0 -c pytorch

3. Latest PyTorch Version

Facebook released PyTorch 1.3, the latest version at the time of writing, in 2019. This version is packed with new changes and bug fixes. Some of the exciting new features are mobile support, named tensors, and quantization, added to meet the needs of researchers. I will briefly explain these new features along with some other information.

PyTorch Named Tensors

Prior to the 1.3 release, PyTorch had no way to name tensor dimensions: broadcasting was based purely on position, and dimension information could only live in documentation. PyTorch 1.3 overcomes this by adding named tensors as a feature, so users can access tensor dimensions directly by name. Previously, even for a simple task, users had to know the positional structure of the tensor; with named dimensions, users can rearrange dimensions by name as required.

Named tensors also support error checks on dimension names, verifying that a name matches the dimension it is used with.

Example:

import torch
data_sample = torch.randn(100, 3, 250, 600, names=('N', 'C', 'H', 'W'))

Here, N is the number of batches (batch size), C is the number of channels, H is the height of the image, and W is the width of the image.
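Building on the example above, the dimension names can then be used directly; a minimal sketch follows (named tensors were still experimental in PyTorch 1.3, so details may vary by release):

import torch

data_sample = torch.randn(100, 3, 250, 600, names=('N', 'C', 'H', 'W'))
print(data_sample.names)              # ('N', 'C', 'H', 'W')

# Reduce over a dimension by its name instead of its position.
no_channels = data_sample.mean('C')
print(no_channels.names)              # ('N', 'H', 'W')

# Reorder dimensions by name, e.g. to a channels-last layout.
channels_last = data_sample.align_to('N', 'H', 'W', 'C')
print(channels_last.names)            # ('N', 'H', 'W', 'C')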

PyTorch Quantization

To run quantized operations PyTorch uses x86 CPUs with AVX2 support and ARM CPUs.

import torch
import torch.nn as nn

m = nn.quantized.ReLU()
input = torch.randn(2)
input = torch.quantize_per_tensor(input, 1.0, 0, dtype=torch.qint32)

PyTorch Mobile Support

Quantization is used while developing ML applications so that PyTorch models can be deployed to mobile or other edge devices. In PyTorch 1.3, the developers added end-to-end workflow APIs for Android and iOS. This was done to reduce latency and provide security on the edge node. It is an early-stage feature, and work is still in progress on optimized computation, performance, and coverage on mobile CPUs and GPUs.

Apart from the above three features, some other features were added, such as support for PyTorch on Google Colab, support for TensorBoard, and performance improvements in the autograd engine, plus new tools for model privacy and interpretability and tools to support multi-modal AI systems.

Conclusion

In conclusion, PyTorch is the most used deep learning framework with support to all state of the art technology. As developers are continuously working on improving the PyTorch you can assume that there will be many more releases with exciting new features that will get added. So learning PyTorch to create machine learning or deep learning application will be beneficial for aspiring AI enthusiasts as this is one of the well documented and supported frameworks.

Recommended Articles

We hope that this EDUCBA information on “PyTorch Versions” was beneficial to you. You can view EDUCBA’s recommended articles for more information.

Sentiment Analysis With Lstm And Torchtext With Code And Explanation

In this article, we will see all the details that you need to know for sentiment analysis with an LSTM network using the torchtext library. We will see how to use the spaCy tokenizer in the torchtext data classes and how to use the TabularDataset and BucketIterator. We will use an embedding matrix, with or without pre-trained GloVe embeddings, as input, and we will also see how to process text of different lengths in a batch with pack_padded_sequence. You can then apply these techniques to your own problem.

What are Field and LabelField?

In sentiment data, we have text data and labels (sentiments). torchtext comes with its own text-processing data types for NLP. Text data uses the data type Field, and the data type for the class labels is LabelField. In older versions of torchtext you can import these data types from torchtext.data, but in the newer versions you will find them in torchtext.legacy.data. You can find detailed information about Field in the torchtext documentation.

Some important arguments of these data types that you will use are 'tokenize', 'use_vocab', 'batch_first', 'include_lengths', 'sequential', and 'lower'. Let's first understand the argument tokenize. In simple words, tokenization is the process of splitting a sentence into words or more basic units. You can tokenize in several ways: define your own tokenizer function, build one with torchtext's get_tokenizer, or use the built-in tokenizer of Field. First, we will install spaCy and then look at the tokenizer function.

pip install spacy
python -m spacy download en_core_web_sm

# Build tokenizer
import spacy
spacy_en = spacy.load('en_core_web_sm')

def tokenizer(text):
    return [token.text for token in spacy_en.tokenizer(text)]

You can also define using torch get_tokenizer as well (another way to define) :

from torchtext.data.utils import get_tokenizer
tokenizer = get_tokenizer('spacy', language='en_core_web_sm')

Let’s see the output of any of the tokenizer we defined above. Both are the same.

print(tokenizer("I can't run whole day"))

Output:
['I', 'ca', "n't", 'run', 'whole', 'day']

After defining the tokenizer, you can pass it into your Field. Field is the data type for your input text. For the purposes of this article, let's define some sample data in a CSV file.

TEXT = data.Field(tokenize=tokenizer, use_vocab=True, lower=True, batch_first=True, include_lengths=True)
LABEL = data.LabelField(dtype=torch.long, batch_first=True, sequential=False)
fields = [('text', TEXT), ('label', LABEL)]

In the above data set and code: the text input is sequential data, and the sequential argument is True by default, so there is no need to pass it in the first line of code, while we set it explicitly in the label field. The include_lengths argument returns the length of each sentence in a batch; we will see this in more detail in the BucketIterator section of this article. We can also use a tokenizer within the Field without using any of the tokenizer functions we defined above:

TEXT = data.Field(use_vocab=True, lower=True, tokenize='spacy', tokenizer_language='en_core_web_sm', batch_first=True, include_lengths=True)

TabularDataset for the Project

training_data = data.TabularDataset(
    path='sample.csv',
    format='csv',
    fields=fields,
    skip_header=True,
)

for example in training_data.examples:
    print(example.text, example.label)

Output:
['she', 'good'] 1
['he', 'is', 'sad'] 2
['i', 'am', 'very', 'happy'] 1

We will do the same thing we always do: split the data into train and validation sets, as we would with Sklearn's train_test_split. Here TabularDataset has a split function of its own, and we will use it to split our data with a random state:

train_data, val_data = training_data.split(split_ratio=0.7, random_state=random.seed(SEED))

Glove Embedding for Sentiment Analysis LSTM TorchText

Up to this point, we have read our data and converted it into a TabularDataset. Now we will see how to use embeddings with this data. I am giving some basic notes on embeddings, which will be helpful if you are not already familiar with them. A neural net only deals with numbers. An embedding converts words into integers, and there is a vector corresponding to each integer. Suppose we have 10k words in our dictionary and each word has been assigned a value between 1 and 10k.

Create a zero vector of dimension 10k. Now suppose you want to represent the word "man": because its value in the dictionary is 1, put a 1 at the first index of the vector and keep the rest at zero. Such vectors are one-hot encoded vectors, and the problem with them is their dimension: if we had 2B words in our dictionary, we would have to make a 2B-dimensional vector.
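As a minimal sketch of the idea (using a toy four-word dictionary instead of 10k words; the words chosen are just for illustration):

import torch

# Toy dictionary: word -> index; a real vocabulary would have thousands of entries.
word_to_idx = {"man": 0, "woman": 1, "king": 2, "queen": 3}

def one_hot(word, vocab_size):
    # Zero vector with a single 1 at the word's index.
    vec = torch.zeros(vocab_size)
    vec[word_to_idx[word]] = 1.0
    return vec

print(one_hot("man", len(word_to_idx)))   # tensor([1., 0., 0., 0.])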

To overcome this problem we use dense vectors, and GloVe is one such approach that provides a dense vector for each word. Here we will download and use pre-trained GloVe embeddings in our problem. You can download the GloVe vectors using torch, and all the dimensional details can be found in the torchtext/GloVe documentation.

from torchtext.vocab import Vectors

vectors = Vectors(name='glove.6B.50d.txt')
TEXT.build_vocab(train_data, vectors=vectors, max_size=10000, min_freq=1)
LABEL.build_vocab(train_data)

In the above code, we initialized the vector and build our training data vocabulary with this vector. I mean, we get a vector for all known tokens from the data set (word/ token). We can restrict the size of vocabulary also. If you do not have the Glove text file, use the following code to download the vector. The cache argument will help you to store the downloaded file for future use. I mean, no need to download the same file again and again.

import os
from torchtext.vocab import GloVe

cache = '.vector_cache'
if not os.path.exists(cache):
    os.mkdir(cache)
vectors = GloVe(name='6B', dim=50, cache=cache)

When you have built the vocabulary, you can check out the dictionary. Here I have small data so I can print whole tokens here for demonstration purposes.

print(list(TEXT.vocab.stoi.items()))

Output:
[('<unk>', 0), ('<pad>', 1), ('am', 2), ('good', 3), ('happy', 4), ('he', 5), ('i', 6), ('is', 7), ('sad', 8), ('she', 9), ('very', 10)]

If you have noticed, we have two extra tokens UNK and PAD and the corresponding indices of these two are 0 and 1. If you want to see the vector corresponding to token=’good’, you can do this by the code below.

print(TEXT.vocab.vectors[TEXT.vocab.stoi['good']])

Here TEXT.vocab.vectors contains 50-dimensional vectors for 11 different tokens, and TEXT.vocab.stoi converts a string to its integer index. The vectors for UNK and PAD are always zero vectors. I am not printing the values, as that would take more space here, but you can play around with them. Now I get the device type available, because it is going to be used in the BucketIterator.

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

BucketIterator for Sentiment Analysis LSTM TorchText

Before the code for the BucketIterator, let's understand why we need it. This iterator rearranges our data so that sequences of similar lengths fall into one batch, in descending order of sequence length (seq_len = number of tokens in a sentence). If we have texts of lengths [4, 6, 8, 5] and we want to split this data into two batches, the BucketIterator will split it into [8, 6] and [5, 4].

Figure 3: BucketIterator for one batch

Arranging data in descending order is required for efficient computation. Should we replace the question marks in the figure with PAD tokens? You will get the answer later in this article. The BucketIterator keeps sentences of similar lengths in one batch, which reduces the padding-token overhead from a computational point of view. First, see how to code the BucketIterator:

BATCH_SIZE = 2
train_itr, val_itr = BucketIterator.splits(
    (train_data, val_data),
    batch_size=BATCH_SIZE,
    sort_key=lambda x: len(x.text),
    device=device,
    shuffle=True,
    sort_within_batch=True,
    sort=False
)

I hope every argument is self-explanatory here, we passed the batch size of 2. Choose batch size wisely as it is a crucial hyper-parameter and its value also depends on how much data you can process in your GPU/CPU memory. We did not sort the entire data-set but we did sort the data samples within a batch(sort_within_batch=True). See how our batches look:

for batch_no, batch in enumerate(train_itr):
    text, batch_len = batch.text
    print(text, batch_len)
    print(batch.label)

Output:
(tensor([[ 6,  2, 10,  4],
         [ 5,  7,  8,  1]]), tensor([4, 3]))
tensor([0, 1])

Each batch contains the token ids and labels, here we got the length of each sentence in a batch as well because we passed include_length as true in the TEXT Field. If you have more sentences of different lengths, you will see BucketIterator arrange the data very nicely.

Basics of LSTM Model

Long short-term memory (LSTM) is a member of the RNN family. An RNN learns sequential relationships, which is why RNNs work well in NLP: the next token carries some information from the previous tokens. An LSTM can learn longer sequences compared to a plain RNN or a GRU. Example: "I am not going to say sorry, and this is not my fault."

Here the same person who does not want to say sorry is also confident of not being guilty. To understand such logic the network has to be capable of learning the relationship between the first word to the last word of a sentence if necessary. For longer sentences, the network has to understand the relevant relationship between all words and the order of the sequence (which token is coming next in the sentence).

The LSTM plays this role very well and remembers longer dependencies in the sequence, thanks to its capability of retaining relevant information and forgetting irrelevant information. You can explore a dedicated RNN article for more details on the basics.

Input Shape and Hidden

The input can be given in two ways: 1. (Sequence First: Sequence Length, Batch Size, Input Dimension) 2. (Batch First: Batch Size, Sequence Length, Input Dimension). We will use the second format of the input here. We already have defined the batch size in the BucketIterator, the sequence_length is the number of tokens in a batch and the input dimension is the Glove vector dimension which is 50 in our case.

The hidden state shape is (number of directions * number of layers, batch size, hidden size). Sentiment information can be extracted using a bidirectional LSTM, so the number of directions is 2, and we will use 2 LSTM layers, so that value is also 2 in our case. The batch size we already discussed, and for the hidden size you can choose any suitable value: 8, 16, 32, 64, etc.

Figure 4: Input shape for LSTM (RNN)
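To make these shapes concrete, here is a minimal sketch of mine (not part of the original code), with values chosen to match the description above: GloVe dimension 50, 2 bidirectional layers, hidden size 64, and a batch of 2 sequences with 4 tokens each:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=50, hidden_size=64, num_layers=2,
               bidirectional=True, batch_first=True)

x = torch.randn(2, 4, 50)             # (batch size, sequence length, input dimension)
out, (hidden, cell) = lstm(x)

print(out.shape)      # torch.Size([2, 4, 128]) -> hidden size * 2 directions
print(hidden.shape)   # torch.Size([4, 2, 64])  -> (num layers * 2 directions, batch, hidden)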

Model

class SentimentClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden, n_label, n_layers):
        super(SentimentClassifier, self).__init__()
        self.hidden = hidden
        self.n_layers = n_layers
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, num_layers=n_layers, bidirectional=True, batch_first=True)  # dropout=0.2
        self.fc = nn.Linear(hidden * 2, n_label)

    def forward(self, input, actual_batch_len):
        embed_out = self.embed(input)
        hidden = torch.zeros(self.n_layers * 2, input.shape[0], self.hidden)
        cell = torch.zeros(self.n_layers * 2, input.shape[0], self.hidden)
        pack_out = nn.utils.rnn.pack_padded_sequence(
            embed_out, actual_batch_len, batch_first=True).to(device)
        out_lstm, (hidden, cell) = self.lstm(pack_out, (hidden, cell))  # dropout
        hidden = torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)
        out = self.fc(hidden)
        return out

VOCAB_SIZE = len(TEXT.vocab)
EMBEDDING_DIM = TEXT.vocab.vectors.shape[1]
HIDDEN = 64
NUM_LABEL = 4  # number of classes
NUM_LAYERS = 2
model = SentimentClassifier(VOCAB_SIZE, EMBEDDING_DIM, HIDDEN, NUM_LABEL, NUM_LAYERS)

This is our model, do not worry we will break this code step by step. VOCAB_SIZE: Total tokens in data set, EMBEDDING_DIM: Glove vector dimension (50 here), HIDDEN we took 64, NUM_LABEL is our number of classes and NUM_LAYERS is 2: 2 stacked LSTM layer. First, we defined the embedding layer which is a mapping of the vocabulary size to a dense vector, this is the reason, we have mapped total vocab size to the vector dimension. See an example for torch embedding where we have only 2 tokens in the vocab and we want it to transform into a 4-dimensional vector:

emb = nn.Embedding(2, 4)  # size of vocab = 2, vector len = 4
print(emb.weight)

Output:
tensor([[ 0.2626, -0.7775, -0.7230,  0.6391],
        [-0.7772,  0.4914, -0.9622,  1.2316]], requires_grad=True)

In the above code, the first and second output list is a 4-dimensional embedding vector for emb(0)[token 1] and emb(1)[token[2] respectively. The second thing we defined in the classifier is the LSTM layer, we did a mapping of the vector (Embedding dimension) to the hidden. You can also pass dropout in LSTM for regularization. At last, we defined a fully connected layer which resulted out in our desired number of classes and the input for this linear transformation is two times the hidden. Why have two times hidden? Because this is bidirectional LSTM and we are concatenating the final hidden cells from the forward and backward direction of the last layer of LSTM (As we have bidirectional LSTM layers).

Time to discuss what we did in the forward method of the SentimentClassifier class. We pass two arguments: the input (batched data) and the number of tokens in each sequence of the batch. First we pass the input to the embedding layer we created, but wait: this embedding layer knows nothing about the GloVe embeddings we just downloaded. If you do not want to use any pre-trained embeddings, just go ahead (the embedding parameters are learned from scratch); otherwise, run the following code to copy the existing vectors for each token we have.

model.embed.weight.data.copy_(TEXT.vocab.vectors)
print(model.embed.weight)

Output:
tensor([[ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
        [-0.2660,  0.4732,  0.3187,  ..., -0.1116, -0.2955, -0.2576],
        ...,
        [ 0.1777,  0.1764,  0.0684,  ...,  0.1164, -0.0368,  0.1446],
        [ 0.4121,  0.0792, -0.4929,  ...,  0.0564,  0.1322, -0.5023],
        [ 0.5183,  0.0194,  0.0089,  ...,  0.2638, -0.0442, -0.3650]])

The first two vectors are zero vectors, as they represent the UNK and PAD tokens (as we saw in the GloVe embedding section). Copying the pre-trained embeddings helps our model converge much faster, as the tokens are already well positioned in some high-dimensional space. So do not forget to copy the existing vectors from the pre-trained embeddings.

The hidden state and cell state need to be reset before the first token of every new sequence in an LSTM, and this is why we initialized them to zero before passing them to the LSTM. If we do not set the hidden and cell states to zero, Torch does it for us, so it is optional here. We also used pack_padded_sequence, and the question is why. As you remember, we saw question marks in Figure 3 for the empty (padded) token positions; just scroll up if you missed them.

pack_padded_sequence

Then we used pack_padded_sequence on the embedding output. As BucketIterator grouped the similar length sequences in one batch with descending order of sequence length, and this is essential for pack_padded_sequence. The pack_padded_sequence returns you new batches from the existing batch. I will give you all the basics through code:

Figure 5: Batch creation with pack_padded_sequence

data: tensor([[ 6, 2, 10, 4], [ 9, 3, 1, 1]]) # 1 is padded token len: tensor([4, 2])

Let's have a batch of two sentences: (1) "I am very happy" and (2) "She good". The token ids are written above, with lengths [4, 2]. pack_padded_sequence converts the data into batches of [2, 2, 1, 1], as shown in Figure 5. Let us understand this with a small code example: we pass the embedding output to pack_padded_sequence along with the list of sequence lengths we have, [4, 2].

for batch in train_itr:
    text, text_len = batch.text
    emb = nn.Embedding(vocab_size, EMB_DIM)
    emb.weight.data.copy_(TEXT.vocab.vectors)
    emb_out = emb(text)
    pack_out = nn.utils.rnn.pack_padded_sequence(emb_out, text_len, batch_first=True)
    rnn = nn.RNN(EMB_DIM, 4, batch_first=True)
    out, hidden = rnn(pack_out)

If we print the hidden here we will get:

Hidden Output: [[[ 0.9451, -0.9984, -0.4613, 0.9768], [ 0.9672, -0.9905, -0.1192, 0.9983]]]

If we print the complete output we will get:

rnn_output: [[ 0.9092, -0.9358, -0.8513, 0.9401], [ 0.8691, -0.9776, 0.5006, 0.1485], [ 0.8109, -0.9987, 0.9487, 0.9641], [ 0.9672, -0.9905, -0.1192, 0.9983], [ 0.9926, -0.9055, -0.5543, 0.9884], [ 0.9451, -0.9984, -0.4613, 0.9768]]

Refer to Figure 5 for this explanation (focus on the purple-outlined tokens). The hidden state of the last token of each sequence is what explains the sentiment of that sentence. The first hidden output corresponds to the last token ("happy") of the first sequence, and in the rnn_output list it is the last (6th) entry. The second hidden output belongs to the last token of the second sequence ("good"), and it is the 4th rnn_output. The 5th rnn_output is an intermediate step of the first sequence and is of no use for sentence-level sentiment here. If our sequence lengths and data set grow, we can save a lot of computation with pack_padded_sequence. You can transform the output back to its original padded form by printing the following line, and I leave this part for you to analyze.

print(nn.utils.rnn.pad_packed_sequence(out, batch_first=True))

Now we have completed all the required things we need to know, we have data in our hands, we have made our model ready and we copied Glove embedding to our model’s embedding. So at last we will define some hyper-parameters then we will start training data.

Calculate Loss

opt = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()
model.to(device)

We defined CrossEntropyLoss (multi-class) as the loss function, as we have 4 output classes, and we used Adam as the optimizer. If you remember, we passed the device to the BucketIterator, so if you have CUDA, also call model.to(device), because the data and the model need to be in the same memory, either CPU or GPU. Now we will define functions to calculate the loss and accuracy of our model.

def accuracy(preds, y):
    _, preds = torch.max(preds, dim=1)
    acc = torch.sum(preds == y) / len(y)
    return acc

def calculateLoss(model, batch, criterion):
    text, text_len = batch.text
    preds = model(text, text_len.to('cpu'))
    loss = criterion(preds, batch.label)
    acc = accuracy(preds, batch.label)
    return loss, len(batch.label), acc

The accuracy function consists of simple Torch operations: matching our predictions with the actual labels. In calculateLoss we pass the input to our model; the only thing to note here is that we move the batch sequence lengths (text_len in the code above) to the CPU first.

Epoch Loop

import numpy as np

N_EPOCH = 100
for i in range(N_EPOCH):
    model.train()
    train_len, train_acc, train_loss = 0, [], []
    for batch_no, batch in enumerate(train_itr):
        opt.zero_grad()
        loss, blen, acc = calculateLoss(model, batch, criterion)
        train_loss.append(loss * blen)
        train_acc.append(acc * blen)
        train_len = train_len + blen
        loss.backward()
        opt.step()
    train_epoch_loss = np.sum(train_loss) / train_len
    train_epoch_acc = np.sum(train_acc) / train_len
    model.eval()
    with torch.no_grad():
        val_results = [calculateLoss(model, batch, criterion) for batch in val_itr]
        loss, batch_len, acc = zip(*val_results)
        epoch_loss = np.sum(np.multiply(loss, batch_len)) / np.sum(batch_len)
        epoch_acc = np.sum(np.multiply(acc, batch_len)) / np.sum(batch_len)
    print('epoch:{}/{} epoch_train_loss:{:.4f},epoch_train_acc:{:.4f}'
          ' epoch_val_loss:{:.4f},epoch_val_acc:{:.4f}'.format(
              i + 1, N_EPOCH, train_epoch_loss.item(), train_epoch_acc.item(),
              epoch_loss.item(), epoch_acc.item()))

If you are new to Torch: we use three important calls here: (1) opt.zero_grad() to set all gradients to zero, (2) loss.backward() to compute the gradients, and (3) opt.step() to update the parameters. All three are only for training data, so we wrap the evaluation phase in torch.no_grad().

Conclusion

Wow, we have completed this article, and it’s time for you to hands-on your data set. In my experience in many real-world applications, we are using sentiment analysis heavily in the industry. I hope this article helps your understanding much better than before. See you next time with some other interesting NLP article.

All the images used in this article are designed by the author.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

Guide To Simple & Powerful Types Of C# Versions

Introduction to C# Versions

C# is an object-oriented language; it is simple and powerful and was developed by Microsoft. The first release of C# occurred in the year 2002. Since then, the versions described below have been released. In this article, we will discuss the different versions.


Versions of C#

1. C# Version 1.0

This version was much like Java. It lacked async capabilities and some other functionality. The major features of this release are below.

Classes: A class is a blueprint that is used to create objects.

There can be only one public class per file.

Comments can appear at the beginning or end of any line.

If there is a public class in a file, the name of the file must match the name of the public class.

If exists, the package statement must be the first line.

import statements must go between the package statement(if there is one) and the class declaration.

If there are no package or import statements, the class declaration must be the first line in the source code file.

import and package statements apply to all classes within a source code file.

File with no public classes can have a name that need not match any of the class names in the file.

Code:

using System;

public class Test
{
    public int a, b;
    public void Display()
    {
        Console.WriteLine("Class in C#");
    }
}

Structure: In Struct, we can store different data types under a single variable. We can use user-defined datatype in structs. We have to use the struct keyword to define this.

Code:

using System;

namespace ConsoleApplication
{
    public struct Emp
    {
        public string Name;
        public int Age;
        public int Empno;
    }

    class Geeks
    {
        static void Main(string[] args)
        {
            Emp P1;
            P1.Name = "Ram";
            P1.Age = 21;
            P1.Empno = 80;
            Console.WriteLine("Data stored in P1: name is " + P1.Name + ", age is " + P1.Age + " and empno is " + P1.Empno);
        }
    }
}

Interfaces:

The interface is used as a contract for the class.

All interface methods are implicitly public and abstract.

All interface variables are public static final.

static methods not allowed.

The interface can extend multiple interfaces.

Class can implement multiple interfaces.

Class implementing interface should define all the methods of the interface or it should be declared abstract.

Literals: A literal is a fixed value assigned to a variable, much like a constant value.

Code:

using System;

class Test
{
    public static void Main(string[] args)
    {
        int a = 102;
        int b = 0145;
        int c = 0xFace;
        Console.WriteLine(a);
        Console.WriteLine(b);
        Console.WriteLine(c);
    }
}

Delegates: A delegate is like a function pointer; it is a reference type variable that holds references to other methods.

2. C# Version 1.2

In this version, some small enhancements were made. The notable change concerned the foreach loop, which executes a block of code once for each element in a collection.

3. C# Version 2.0

Generics: Generic programming is a style of computer programming in which algorithms are written in terms of types to-be-specified-later that are then instantiated when needed for specific types provided as parameters.

Anonymous Method: This is a method without a name, defined using the delegate keyword.

Nullable types: Before this release, a value-type variable could not be set to null; this release overcomes that limitation.

Iterators

Covariance and contravariance

Getter/setter separate accessibility: Getters and setters can be given different access levels for getting and setting values.

4. C# Version 3.0

This version made C# a formidable programming language.

Object and collection initializers: With the help of these, we can set accessible fields or properties of an object at creation time without writing a dedicated constructor for each case.

Partial Method: As the name suggests, its signature and implementation are defined separately.

Var: we can define any variable by using the keyword var.

5. C# Version 4.0

The version introduced some interesting features:

Dynamic Binding: This is like method overriding, except the compiler does not decide which method to call; the decision is deferred to run time.

Code:

using System;

public class SuperClass
{
    public virtual void Print()
    {
        Console.WriteLine("superclass.");
    }
}

public class SubClass : SuperClass
{
    public override void Print()
    {
        Console.WriteLine("subclass.");
    }
}

public class ClassA
{
    public static void Main(string[] args)
    {
        SuperClass x = new SuperClass();
        SuperClass y = new SubClass();
        x.Print(); // superclass.
        y.Print(); // subclass. (resolved at run time)
    }
}

Named/Optional Arguments

Generic Covariant and Contravariant

Embedded Interop Types

Here the major feature was the dynamic keyword, which defers type checking from compile time to run time.

6. C# Version 5.0

async and await

With these, we can easily write asynchronous code without blocking the calling context, which is very helpful for long-running operations. The async modifier enables the await keyword, and with await the operation runs asynchronously; execution proceeds synchronously until the first await keyword is reached.

7. C# Version 6.0

This version included below functionalities

Static imports

Expression bodied members

Null propagator

Await in catch/finally blocks

Default values for getter-only properties

Exception filters

Auto-property initializers

String interpolation

nameof operator

Index initializers

8. C# Version 7.0

Out Variables: These are basically used when a method has to return multiple values. The out keyword is applied to the arguments being passed.

Other important aspects are

Tuples and deconstruction.

Ref locals and returns.

Discards: These are write-only placeholder variables, basically used to ignore values that are not needed.

Binary Literals and Digit Separators.

Throw expressions

Pattern matching: We can use this on any data type.

Local functions: With the help of this feature, we can declare a function inside the body of another method.

Expanded expression-bodied members.

So every version has added new features to C# that help developers solve complex problems in an efficient manner. The next release will be C# 8.0.

Recommended Articles

This is a guide to C# Versions. Here we discuss the basic concept, various types of C# Versions along with examples and code implementation. You can also go through our other suggested articles –

Top 3 Use Cases Of Ai Api With Examples In 2023

Almost 70% of companies have reported an increase in their revenue after the adoption of AI in 2023. Additionally, companies that use APIs have seen their share price increase by more than 12% relative to companies that don't. Combining AI and APIs can unlock new potential for companies by simplifying the usage of AI models.

However, executives may lack knowledge of the use cases of AI APIs. Therefore, in this article, we cover the top 3 use cases of AI APIs and provide examples of APIs that enable AI integration. Additionally, we cover the impact that AI has on API testing.

1- Text & speech analysis

Natural language processing (NLP) is an AI-based technology that allows computers and machines to comprehend human speech and text. An NLP API lets developers use existing NLP models and platforms without building their own, which is a significant benefit, as developing NLP models is cumbersome and expensive due to the substantial amount of data gathering and labeling required.

General NLP benefits include: 

If you are interested in use cases of NLP, read the Comprehensive Guide to Top 30 NLP Use Cases & Applications.

An example of AI API that use NLP is: 

Google cloud natural language API

This API enables developers to apply natural language understanding (NLU) to their apps. It includes features such as :

Sentiment analysis,

Entity analysis,

Content classification,

Syntax analysis.

In Figure 1, you can see an example of this API in use:

Figure 1. Google cloud natural language API in use
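For readers who want to try it, a minimal Python sketch of calling this API for sentiment analysis might look like the following. It assumes the google-cloud-language package is installed and Google Cloud credentials are configured; the sample sentence is just an illustration:

from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The new phone has a great camera but poor battery life.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Document-level sentiment: score in [-1, 1], magnitude >= 0.
response = client.analyze_sentiment(request={"document": document})
sentiment = response.document_sentiment
print(sentiment.score, sentiment.magnitude)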

2- Computer vision

Computer vision is an AI-enabled technology in which images and videos can be analyzed to retrieve meaningful information from them. Actions or recommendations can be made based on the results. Computer vision APIs enable using existing computer vision models instead of developing a computer vision model yourself, as developing computer vision models is expensive and time-consuming. 

Computer vision has a variety of use cases in different industries such as:

Detecting tumors and cancers in the healthcare industry.

Inventory management in the retail industry.

Crop monitoring in agriculture .

Autonomous driving in the automotive industry. 

An example of computer vision API is : 

Microsoft Azure computer vision APIs

Developers can retrieve information from the images they submit, which are analyzed by Microsoft's image-processing algorithms. Some of the information that can be retrieved from these APIs is listed below (a minimal request sketch follows the list):

Visible brands in the image,

Description of the image,

Faces in the image and their sex and age,

Landmark location,

Celebrity detection. 
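As a rough sketch (not an official sample), the image-analysis call can be made over REST with the requests library; the endpoint, key, and image URL below are placeholders you would replace with your own Azure resource values:

import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"   # placeholder
key = "<your-subscription-key>"                                    # placeholder

analyze_url = endpoint + "/vision/v3.2/analyze"
params = {"visualFeatures": "Description,Faces,Brands"}
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {"url": "https://example.com/sample.jpg"}                   # placeholder image

resp = requests.post(analyze_url, params=params, headers=headers, json=body)
resp.raise_for_status()
analysis = resp.json()
print(analysis.get("description", {}).get("captions"))
print(analysis.get("faces"))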

3- Machine learning 

Machine learning (ML) is a part of AI & computer science that focuses on using data & algorithms to simulate how humans learn so it can gradually increase the accuracy of the system. 

APIs can be used to access ML models in order to implement them in analysis or applications. They provide a set of functions and tools that can be used in ML development. Machine learning has many use cases, such as :

Algorithmic trading in the finance industry. 

Genomic analysis in the healthcare industry .

Predictive analytics in the manufacturing industry. 

IBM Watson machine learning API

Using IBM Watson machine learning API, models can be :

Trained.

Stored.

Scored.

Deployed. 

Integrated. 

IBM Watson’s machine learning platform provides tools that can fully automate your training processes. 
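As a loose sketch of how scoring a deployed model can look over REST (the endpoint path, version date, field names, and values below are assumptions for illustration only; consult the Watson Machine Learning documentation for the exact details of your deployment):

import requests

api_url = "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/<deployment-id>/predictions"  # placeholder
token = "<bearer-token-from-IBM-Cloud-IAM>"                                                  # placeholder

payload = {
    "input_data": [{
        "fields": ["age", "income"],   # feature names of the deployed model (illustrative)
        "values": [[42, 55000]],       # one row to score (illustrative)
    }]
}

resp = requests.post(
    api_url,
    params={"version": "2021-05-01"},
    headers={"Authorization": "Bearer " + token},
    json=payload,
)
print(resp.json())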

We have provided you with a data-driven list of more than 300 AI APIs that you can access here. 

Effect of AI on API testing

API testing is an important aspect of API development as it increases the chances of API functioning as desired. However, API testing is a time-consuming task if done manually. AI-enabled testing can automate API testing which in turn can:

Reduce the cost of testing

Reduce the time of testing

Increase testing coverage

Sponsored

PULSE is an automated, AI-enabled tool for API testing provided by Testifi. PULSE can decrease the cost and effort of testing by 50%. Many reputable companies, such as Amazon and BMW, use Testifi's services.

You can check our list of top API testing tool providers here.


Newly Introduced Elements Exclusively In Html5 Versions

Introduction to HTML5 Tags

We all know the standard abbreviation of HTML: HyperText Markup Language. HTML5 is the latest version of HTML. Once a product is developed, there are naturally many versions with new developments along the way, and HTML5 brings new attributes and behaviors. HTML is not a programming language; it is a markup language. Now, what is a markup language? A markup language defines elements and their attributes using tags in a document. Now, let's get into the details of how we can define tags and create a web page.


Tags of HTML5

In HTML5, we can generally divide the tags into two categories.

The tags discussed below are those newly introduced exclusively in the HTML5 version. They can be grouped into the following categories.

1. Structural Tags

Below are the types of structural tag with examples:

a. Article: This is a tag that is mostly used similar to a head tag. Majorly used in forms, blogs, news story and all as examples.

Code:

Output:

b. Aside: Something similar to our normal tags, which would relate the content to the surrounding contents, like a sidebar in the article. And this tag would only make sense when using an IE version above 8.

c. Details: This tag is used to provide some additional data to the user. This can be an interactive platform that can hide or show the details. We can get to see the usage of this tab under the summary tag.

d. Header: This tag is related to the header part and contains title information. It has to have both the start and end tags.

Output:

e. hgroup: This tag is used in describing a group of headers. Let’s look at the example.

Code:

Output:

f. Footer: This tag is that, which is to be placed at the end of the page. It deals with something like copyright, history-related information or data. Let us see a small example below.

Code:

Output:

g. nav: This tag is for providing a section of all the links for navigation.

Code:

Output:

h. Section: As the name already denotes, this tag defines the part of the code like the body, header, footer, etc. Here, both the start and end tags are necessary. Let us see a small example below:

Code:

Output:

i. Summary: This tag is used in parallel with the details tab. Under the details tag, we have this summary tag to summarize the concepts. Example below:

Code:

Output:

Now expanding the summary tag data, we get the below.

2. Form Tags

Here are the different types of form tag explained below with examples:

a. Datalist: This tag is used like a drop-down that has pre-defined values for a user to choose. Let us have a look at the small example below:

Code:

Output:

The drop-down pops up when the mouse has hovered over.

b. Keygen: This is for the encryption. It is for generating an encrypted key for passing the data in an encrypted format. Only the start tag is enough/required for this element, and the end tag is not mandatory.

c. Meter: This tag would give us the measurement of the data which is present in a given range.

Code:

Output:

3. Formatting Tags

a. BDI: This is Bi-directional isolation. As the name already suggests, this tag can be used to isolate a part of the text and give it different styles from that of other text.

b. Mark: This tag can help us highlight a specific text.

Code:

Output:

c. Output: As the name already shows us, it gives the result of any calculation.

Code:

Output:

Make sure that you notice the form attribute of oninput. Once you input the attribute ‘x’ value, then the output gets displayed.

d. Progress: This tag gives us the progress of a particular task.

Code:

Output:

e. Rp: This is used when the ruby tags are not supported.

f. Rt: It is used with the tag ruby. Mostly this is used in pronunciation in both Japanese and Chinese languages.

g. Ruby: This tag is used with the rt and rp tags where the annotations with respect to the two languages, Chinese and Japanese, are pronounced.

h. Wbr: This tag is for the word break. It is mainly used to check how a word breaks when the window size is resized.

4. Embedded Content Tags

Here are the types of embedded content tag explained below with examples:

a. Audio: As the name already suggests, this tag would help us to incorporate audio files in the HTML document.

b. Canvas: Defines a place on the web page where graphics or shapes, or graphs are present or can be defined. Here is an example.

Code:

window.onload = function () {
    var can = document.getElementById("run");
    var context = can.getContext("2d");
    context.moveTo(30, 60);
    context.lineTo(150, 30);
    context.stroke();
};

Output:

c. Dialog: This tag gives us a default box, especially if we wanted to have data in a box.

Code:

d. Embed: This tag can be used for getting in any external file to the HTML file. We can have only the start tag, and the end tag is not mandatory here. There are different attributes that can be used with this tag, namely, width, height, src, and type.

e. Figure and Figcaption: This, as already in its name, can incorporate the images and can give a caption to that image.

f. Source: This tag can implement multiple audio and video files by providing the location of the files using this source tag.

g. Time: This tag, as the name already notifies, is a tag for displaying the time. And note that this tag is not functional in the cases of internet explorer version 8 and below.

h. Video: With the name of the tag, we can obviously get to know where this tag is used. For specifying the video files, we have this tag. Inside this Audio/Video tags, we define the source tags in specifying the files and their locations.

Input Elements of HTML5 Tags

Here are some input elements which we are using in HTML5 tags:

1. Email: This is one of the input elements in HTML5. This element takes in only email addresses as the input.

2. Number: This input element only accepts the number.

3. Range: As the name already explains, this tag contains a range of numbers.

4. URL: This input tag accepts the input field for the URL address. In this input type, we can only enter the URL.

5. Placeholder: This is an attribute for input types such as text, textarea, or number. The placeholder value shows a hint of the value to be given as the input.

Code:

Enter Date of birth : <input type="text" name="dob" placeholder="DD-MM-YYYY">

Output:

6. Autofocus: This attribute automatically focuses on a particular field where this element has been declared inside the input tag. This attribute is supported only by the latest versions of Chrome, Safari, and Mozilla only. The syntax is like this:

Output:

In this HTML5, we even have an opportunity to get the GeoLocation of a device. There are different methods that can be helpful in making this location tagging easy. There are also different fonts and colors available in HTML5. Below are the few tags that are removed from the HTML usage from this HTML5 version.

Acronym, Applet, big, dir, font, frameset, center, tt (TeleType text), basefont, center, strike, frame, u (underlined text), isindex, noframes, etc. Few attributes that are removed are below:

Align, bgcolor, cellpadding, cellspacing, border, link, shape, charset, archive, codebase, scope, alink, vlink, link, background, border, clear, scrolling, size, width, etc.

Conclusion

So, yes, there are the basic tags and references for HTML5. The initial version of HTML5 was released on 28th October 2014. We have seen different new tags that were introduced and had gone through a few attributes in HTML5. In the end, we had even covered that not only the introduction of new elements was done, but some elements and attributes that were present were restricted from use through this new release of HTML5.

There were many attributes that were given with examples and some with only the data and the purpose of the attribute or elements. Try practicing all those different elements and attributes and keep learning.

Recommended Articles

This is a guide to HTML5 Tags. Here we discuss the top 4 HTML5 tags and their input elements in detail, along with examples and code implementation. You may also look at the following articles to learn more-
