User Defined Data Types In C++


Introduction to User-Defined Data Types in C++

User-defined data types in C++ let programmers represent data in their own way. A data type tells the compiler how the programmer intends to use the data. A data type can be pre-defined or user-defined; examples of pre-defined data types are char, int, and float.


Types of User-Defined Data in C++

The types of user-defined data are as follows:

1. Structure

A structure is a collection of various types of related information under one name. The declaration of a structure forms a template, and the variables it contains are known as members. All the members of the structure are generally related. The keyword used for the structure is “struct.”

For example, a structure for a student identity having ‘name,’ ‘class_num,’ ‘roll_number,’ and ‘address’ as members can be created as follows (note that class itself cannot be used as a member name, since it is a reserved keyword in C++):

struct stud_id {
    char name[20];
    int class_num;    // "class" is a reserved keyword in C++, so class_num is used
    int roll_number;
    char address[30];
};

This is called the declaration of the structure, and it is terminated by a semicolon (;). No memory is allocated when a structure is declared; storage is set aside only when structure variables are defined. One can define the variables of the structure as follows:

stud_id I1, I2;

Where I1 and I2 are the two variables of stud_id. After defining the structure, one can access its members using the dot operator as follows:

I1.roll_number will access the roll number of I1

I2.class_num will access the class number of I2


#include <iostream>
using namespace std;

struct stud_id {
    int class_num, roll_number;   // "class" is a reserved keyword in C++
};

int main() {
    struct stud_id entries[10];   // Create an array of structures
    entries[0].class_num = 4;     // Access structure members
    entries[0].roll_number = 20;
    cout << entries[0].class_num << ", " << entries[0].roll_number;
    return 0;
}

2. Array

An array is a collection of homogeneous data and must be defined before using it for the storage of information. The array can be defined as follows:

int marks[10];

The above statement defines an integer array named “marks” that can store the marks of 10 students. After creating an array, one can access any element by writing the name of the array followed by its index in square brackets. Indices start at 0, so to access the 5th element of marks, the syntax is:

marks[4]

It will give the marks stored at the 5th location of the array. An array can be one-dimensional, two-dimensional, or multi-dimensional, depending upon the specification of elements.

#include <iostream>
using namespace std;

int main() {
    int marks[10];
    marks[0] = 5;
    marks[2] = -10;
    cout << marks[0] << ", " << marks[2];
    return 0;
}

3. Union

Just like structures, the union also contains members of different data types. The main difference is that the union saves memory because union members share the same storage area. In contrast, members of the structure have their unique storage areas. One declares the unions with the keyword “union,” as shown below:

union employee {
    int id;
    double salary;
    char name[20];
};

One can define the variable of the union as:

union employee E;

To access the members of the union, one can use the dot operator as follows:

E.salary;

4. Class

A class is an essential feature of object-oriented programming languages like C++. A class is a group of objects with the same operations and attributes. To declare a class, use the keyword “class” and follow this syntax:

class class_name {
private:
    Data_members;
    Member_functions;
public:
    Data_members;
    Member_functions;
};

In a class, data members and member functions should not share the same name. Two access specifiers define the scope of the members of a class: private and public. A member declared private can be accessed only by the member functions of that class, while a public member can be accessed both inside and outside the class. Members with no specifier are private by default. We refer to the objects that belong to a class as instances of the class. The syntax for creating an object of a class is as follows:

class_name object_name;


#include <iostream>
#include <cstring>
using namespace std;

class kids {
public:
    char name[10];
    int age;
    void print() {
        cout << "name is: " << name;
    }
};

int main() {
    kids k;
    strcpy(k.name, "Eash");
    k.print();
    return 0;
}

5. Enumeration

Enumeration is specified by using the keyword “enum.” It is a set of named integer constants that define all the possible values a variable of that particular type can have. For example, the enumeration of the week can have names of all the seven days of the week as shown below:

#include <iostream>
using namespace std;

enum week_days { sun, mon, tues, wed, thur, fri, sat };

int main() {
    enum week_days d;
    d = mon;
    cout << d;   // enumerators are numbered from 0, so this prints 1
    return 0;
}

6. Pointer

A Pointer is a user-defined data type that creates variables for holding the memory address of other variables. If one variable holds the address of another variable, we say that the first variable is the pointer to the other variable. The syntax for the same is:

type *ptr_name;

Here type is any data type of the pointer, and ptr_name is the pointer’s name.


#include <iostream>
using namespace std;

int main() {
    int a = 10;
    int *p;    // pointer variable is declared
    p = &a;    // p now holds the address of a
    cout << "Value of p = " << p << endl;
    cout << "Value of variable a = " << a << endl;
    cout << "Value of *p = " << *p << endl;
    return 0;
}

7. Typedef

Using the keyword “typedef,” one can give new names to existing data types. Its syntax is:

typedef float balance;

When we create a new name for the float data type, such as “balance,” we can use it to declare variables of the float type. A typedef can make code easier to read and easier to port to a new machine.


#include <iostream>
using namespace std;

typedef int score;

int main() {
    score s1, s2;
    s1 = 80;
    s2 = 95;
    cout << s1 << " " << s2;
    return 0;
}

Conclusion

C++ supports different kinds of user-defined data types, as discussed above. Many other data types exist, such as functions, references, etc. Their use makes programming much easier, and they also help us to club different types of data in a single variable.


What Are User Defined Fields, Validations And Controls In Tdl?


In the topic on object manipulation, we covered the concept of creating and updating internal objects and persisting the data as per the existing structure of the object. When an object is manipulated with particular data, that data must be reflected against the predefined storage name associated with it.

The storage name is the same as the method name available with the object. In real-life scenarios, business data storage requirements may not be limited to the methods already available within the objects. The Tally user may require additional fields on the screen apart from the ones available in Default Tally. For example, while entering a Sales Voucher, the dispatch details should store vehicle details as well. In such scenarios, the need to store or persist additional information as part of an existing Internal Object becomes mandatory.

integrity of data especially when additional functionalities are incorporated apart from the ones provided in default.

When additional information needs to be stored within the existing internal objects and persisted into the Tally database, User Defined Fields (UDFs) are created. User Defined Fields have a storage component defined by the user. All the valid data types available in TDL are applicable to UDFs as well. A user defined field can be of a data type such as String, Amount, Quantity, Rate, Number, Logical or Date. For the usage and implementation of UDFs, the following points need to be taken care of:

The UDF must be defined i.e. a storage component needs to be defined with a specific data type. At this point the storage does not have a correlation with an Internal Object.

The field associated with the UDF needs to be in the context of a Data Object. If the data is to be stored in a sub-object in the existing hierarchy of Internal Object, then the field associated with UDF also needs to be in the same sub-object.

The UDFs are defined under the definition [System: UDF]. The datatype and index number must be specified while creating the UDF.


[System : UDF]


Index numbers from 1 to 9999 and from 20001 to 65536 are open for customisation, while those from 10000 to 20000 are allotted for common development in TSPL. The user can create 65536 UDFs of each data type.

The index numbers 1 to 29 are already used for Default TDL and are as follows:

1 – 29 of data type String

1 – 3 of data type Date and

1 – 2 of data type Number


[System : UDF]

MyUDF 1 : String : 20003

MyUDF 2 : Date   : 20003

In the example above, the UDF MyUDF 1 is defined with a String data type and MyUDF 2 with a Date data type. A UDF does not come into existence until some value is stored in it and it is attached to an Internal Object. A UDF value can be stored along with an object already existing in the Tally database or with a new object being created for a specific object type. Once the value is stored, it can be accessed and used from the specified level just like an ordinary method.

UDF/storage component and attach it at the data object level to which the field is associated i.e. the field value is stored in the context of current object.

The attribute Storage is used to store the value entered in the field, in the current object context.




[Field : NewField]

Use     : NameField

Storage : MyUDF

As discussed, a UDF is attached to an Internal Object at a particular level in the existing hierarchy structure. Once it is stored, it can be accessed in the same way as an existing internal method.

In the context of the current object, the value of a UDF can be accessed by prefixing $ to the UDF name.



[Field: NewField]

Use    : NameField

Set As : $MyUDF

Previously, if the TDL or the TCP was lost or corrupted, then there was no way by which we could know the UDF details like the UDF Number, and hence, the retrieval of data related to the UDF was quite difficult.

Now, an XML attribute ‘Index’, within the UDF ‘List Tag’, has been introduced to help retrieve the original UDF number corresponding to the data available within the Objects associated with it. This UDF number will be available in the Index attribute in the UDF List Tag, even when the TDL is not attached or is unavailable.


Here, the UDF number (1010) is displayed under the ‘Index’ attribute in the UDF List Tag.

UDFs can be classified as given below:

Simple UDF

Complex/Compound/Aggregate UDF

A Simple UDF is used when single or multiple values of a specific data type need to be stored along with the specified Object. A UDF storing a single value of a specific data type can be correlated to a method (for example, $closingbalance), and a UDF storing multiple values of the same data type can be correlated to a simple collection (for example, a name and address collection).

It can store one or more values of a single data type. A UDF used for storage, stores the values in the context of the object associated at Line/Report level, by default. Only one value is stored in this case.

retrieval for the same.

The following example code snippet demonstrates how a UDF can be made use of to store a single value:


[Report : CompanyVehicles]

Object : Company




[Field : CVeh]

Use     : Name Field

Storage : Vehicle

Unique  : Yes

[System : UDF]

Vehicle : String : 700

using $vehicle.

The object is associated at the Report Level. The value stored in a UDF is in the context of Company Object in this case. The UDF Vehicle stores a single string value.

Multiple values can be entered into a field when the line containing it is repeated in the part over the specified UDF. The storage in the field also specifies the name of the UDF. The implementation and usage of this UDF is exactly like a simple collection.




 Let us consider the example below to understand the storage and retrieval for the same. Since the implementation of a Simple UDF storing multiple values is exactly like a Simple Collection, the repeat attribute of Part definition in this case will be as follows: 

[Part : CompVeh]

Line     : CompVeh

Repeat   : CompVeh  : Vehicle

Break On : $$IsEmpty:$Vehicle

Scroll   : Vertical

[Line: CompVeh]

Field : CVeh

[Field: CVeh]

Storage : Vehicle

empty value. All the values entered are stored in the UDF Vehicle and are attached to the Company object associated to the report. Thereafter the values stored in the UDF can be retrieved by using $vehicle in the field contained in the line repeated over the UDF Vehicle.

A Simple UDF can store single or multiple values of a specific data type, i.e., it contains single or repeated values of the same data type. In real-life business scenarios, this does not always satisfy the data storage requirements. In order to store composite values of discrete data types, repeating once or multiple times, an Aggregate UDF can be used.

An Aggregate UDF can contain multiple Simple UDFs of different data types, where each Simple UDF can be either Single or Repeat. It can also contain other Aggregate UDFs within it, and this nesting can continue to any depth. This can be correlated with compound collections.

Aggregate UDFs are defined in the same way as Simple UDFs inside the System:UDF definition. The data type to be specified here is Aggregate. The UDF defined using the keyword Aggregate is actually the container for the subcomponents defined thereafter. The subcomponents can be a Simple UDF or another aggregate UDF.


[System: UDF]



A company wants to create and store multiple details of company vehicles. The details required are: Vehicle Number, Brand, Year of Mfg., Purchase Cost, Type of Vehicle, Currently in Service, Sold On date and Sold for Amount.

[System : UDF]

Company Vehicles      : Aggregate : 1000

VVehicle Number       : String : 1000

VBrand                : String : 1001

VYear of Mfg          : Number : 1000

VPurchase Cost        : Amount : 1000

VType of Vehicle      : String : 1002

VCurrently in Service : Logical : 1000

VSold On date         : Date : 1000

VSold for             : Amount : 1001

To store the required details, Simple UDFs are defined, and to store them as one entity, a UDF of type Aggregate is defined, as shown in the example.

Multiple values of discrete data types can be entered in different fields contained in a line. This line is repeated over the Aggregate UDF, and the storages in the fields specify the component UDFs. The Aggregate UDF definition alone does not associate each component field with the Aggregate UDF; the association takes place only when the line is repeated over the Aggregate UDF and the fields within it store values into the component UDFs. Since the implementation of an Aggregate UDF is exactly like a Compound collection, the Repeat attribute of the Part definition in this case will be as follows:




[Part : Comp Vehicle]

Line     : Comp VehLn

Repeat   : Comp VehLn : Company Vehicles

Break On : $$IsEmpty:$VBrand




[Field : CMP VBrand]

Use     : Short Name Field

Storage : VBrand

Thereafter, the values stored in the individual UDFs can be retrieved by using $VBrand, $VVehicleNumber and so on in the fields contained in the line repeated over the Aggregate UDF Company Vehicles. The Line is repeated over the Aggregate UDF, and the Simple UDFs are entered in the fields.

SubForm is an attribute that is used within a Field definition. It relates to a report (not Form) and can be invoked by a field. This attribute is useful to activate a report within a report, perform the necessary action and return to the report used to invoke the Subform. There is no limit on the number of subforms that can be used at the field level.


[Field : Field Name]


A Sub Form is not associated with the Object at the Report level. The Object associated with the Field in which the Sub Form is defined gets associated with the Sub Form. A Sub Form, which appears as a pop-up, inherits the Object context from the Field.

The Bill-wise Details is an example of a Sub Form attribute. This screen is displayed as soon as an amount is entered for a ledger whose Bill-wise Details feature has been activated.


The following code snippet uses a Sub Form to enter the details of bills when the Bill Collection ledger is selected, while entering a Voucher. The values entered in the Sub Form are stored in an Aggregate UDF. This UDF is attached to the object to which the field displaying the Sub Form is associated. Here, it is the Object of a Ledger Entries Collection.

The following code is used to associate a Sub Form to the default Field in a voucher.

[#Field : ACLSLed]

Sub Form : BillDetail : ##SVVoucherType = “Receipt” and $LedgerName = “Bill Collection”

The Name Report for the Subform uses an Aggregate UDF to store the data. A Line is repeated over the Aggregate UDF at the Part level.

[Part : BillDetails]

Scroll      : Vertical

Line        : BillDetailsH, BillDetailsD

Repeat      : BillDetailsD : BAggre

Break After : $$Line=2

The attribute Storage is used for all the fields.

[Field: CustName1]

Use     : Name Field

Storage : CustName

The UDF is defined as follows:

[System : UDF]

CustName : String : 1000

BillNo   : String : 1001

BillAmt  : Amount : 1001

EPrint1  : String : 1002

BAggre   : Aggregate : 1000

The data stored in Repeat UDFs and Aggregate UDFs is analogous to the Objects in a Collection. This data can be displayed as a table. In order to use the data stored in the UDFs as a table, a collection needs to be constructed.

Since the UDF will always be attached to an existing internal object, the type specification will contain a reference to the primary object.



[Collection: CMP Vehicles]

Title : “Company Vehicles”

We have seen in previous examples that the Repeat UDF “Vehicle” stores multiple values of the same data type and is associated with the Company Object. The collection CMP Vehicles is constructed by specifying the type as Vehicle of a Company Object.

The Child Of attribute specifies the Company Object identifier, which is the current company. Once the collection is defined, it can be used in the Table attribute of a Field definition, so that when the cursor is in the defined field, the values stored in the UDF are displayed as a pop-up table.

[Field: EI Vehicles Det]

Show Table : Always

As we know, a UDF can be stored at any level in the existing Object hierarchy. In such cases, referring to the UDF data and constructing the collection using the referencing method above is not possible; the data corresponding to the UDFs can be gathered only by traversing to the desired level in the hierarchy. The Walk attribute of the collection is used for this.


Refer to the example used in using Subforms where the aggregate UDF “BAggre” with components BillNo, BillAmt, etc. are attached at the Ledger Entries level. The source collection is constructed using Vouchers of type “Receipt”

[Collection: Src Bills]

Child Of : $$VchTypeReceipt

The BillTable collection walks over the Ledger Entries and then over BAggre UDF and then fetches the methods “BillNo” and “BillAmt”. Format is specified for the methods to be displayed in the Table.

[Collection: BillTable]

Source Collection : SrcBills

Walk              : LedgerEntries, BAggre

Fetch             : BillNo, BillAmt

Format            : $BillNo, 10

Format            : $BillAmt, 20

[#Field: VchNarration]

Table             : BillTable

Application developers can use validation to enforce business requirements. The validation concept can be used for different purposes, such as:

Each business has a unique organizational structure, which naturally needs to be reflected in the usage of the Tally application; for example, restricting access to Reports for the Data Entry person, or restricting the Data Entry person from creating Masters.

To assist the Data Entry operator in entering meaningful information; for example, the PF Date of Joining should not be earlier than the Date of Joining.

To enforce integrity constraints; for example, in Vouchers having manual numbering with ‘Prevent Duplicates’ enabled, duplicate Voucher numbers are not allowed.

Customized reports can be brought under default security control

The following section discusses developing TDL-level validation with the help of definitions, attributes and built-in functions.

The attribute Validate checks if the given condition is satisfied. Unless the given condition is satisfied, the user cannot move further. In other words, if the given condition for Validate is not satisfied, the cursor remains placed on the current field without moving to the subsequent field. It does not display any error message.




[Field: CMP Name]

Use      : Name Field

Validate : NOT $$IsEmpty:$$Value

Storage  : Name

Style    : Large Bold

In the above code snippet,

The field CMP Name is a field in Default TDL which is used to create/ alter a Company.

Validate stops the cursor from moving forward, unless some value is entered in the current field.

The function, IsEmpty returns a logical value as True, only if the parameter passed to it contains NULL.

The function, Value returns the value entered in the current field.

Thus, the attribute Validate used in the current field, controls the user from leaving the field blank and forces a user input.

This attribute takes a logical value. If it is set to Yes, then the values keyed into the field have to be unique. If an entry is duplicated, an error message, “Duplicate Entry,” pops up. This attribute is useful when a Line is repeated over a UDF/Collection, in order to avoid repetition of values.



[!Field: VCHPHYSStockItem]

Table  : Unique Stock Item : $$Line = 1

Table  : Unique Stock Item, EndofList

Unique : Yes

In this code snippet, the field, VCHPHYSStockItem is an optional field in DefTDL which is used in a Physical Stock Voucher. The attribute, Unique avoids the repetition of Stock Item names.

This attribute is similar to the attribute Validate. The only difference is that it flashes a warning message and the cursor moves to the subsequent field. A System Formula is added to display the warning message.




[!Field: VCH NrmlBilledQty]

Notify : NegativeStock : ##VCFGNegativeStock AND @@IsOutwardType AND $$InCreateMode AND +


In this code snippet, VCH NrmlBilledQty is a default optional field in DefTDL used in a Voucher. The Notify attribute pops up a warning message if the entered quantity for a stock item is more than the available stock, and the cursor moves to the subsequent field.

The attribute Control is similar to Notify. The only difference is that it does not allow the user to proceed further after displaying a message. The cursor does not move to the subsequent field.



[Field: Employee PFDateOfJoining]

Use       : Uni Date Field

Set As    : If $$IsEmpty:$PFAccountNumber AND $$IsEmpty:$FPFAccountNumber Then “” Else If NOT +

Control   : PFJoiningDateBelowJoinDate:If $$IsEmpty:$PFAccountNumber AND $$IsEmpty: + 

Set Always: Yes

In this code snippet, the field ‘Employee PFDateOfJoining’ is a default field. The Control ensures that the PF Date of Joining for an Employee is never earlier than the Employee’s Date of Joining.

The difference between the field attributes, Validate, Notify and Control are:

Field Attribute   Displays Message   Cursor Movement
Validate          No                 Restricted
Notify            Yes                Not Restricted
Control           Yes                Restricted

If the condition specified with Control is not satisfied, then the Form displays an error message while trying to save. The Form cannot be saved until the condition in the attribute Control is fulfilled.



[Form: Voucher]

In the example, Voucher is a default Form. While creating a voucher, the attribute Control does not accept dates beyond the financial period or before the beginning of the books.

The attribute, Control restricts the appearance of Menu Items, based on the given condition.



[Menu: Quick Setup]

Key Item  : @@locExciseForManufacturer: M:Display: ExciseMfgr QuickSetUp

Control   : @@locExciseForManufacturer: @@IsIndian AND $$IsInventoryOn

In this code snippet, the Menu Quick Setup is a default definition. The Menu Item ‘Excise for Manufacturer’ will be displayed only if the selected Company has statutory compliance for India and the Inventory module enabled.

Multi Objects is a Report-level attribute which needs to be specified when multiple Objects of the same collection are being added or modified in a Report. It is required specifically in the case of multi-master creation or alteration.




[Report: Multi Ledger]

Multi Objects: Ledger Under MGroup

The report level attribute Family is useful when the Security Control is enabled for the company. A Report can be made accessible for only a set of user(s) by setting proper rights at security levels.

For this, the name of the Report needs to be brought under the default Collection ‘Report List’. The value specified with the attribute Family is automatically added to the security list as a pop-up while assigning rights under the Security Control menu.




[Report:Balance Sheet]

Family : $$Translate:”Balance Sheet”

In this code snippet, the Balance Sheet is added to the security list. Only users having rights to display the Balance Sheet can view the Report.

The built-in function $$Allow checks the permissions for the currently logged-in user. It can be effectively used to enable or disable an interface based on those permissions.




[!Menu: Gateway of Tally]

Key Item: @@locAccountsInfo : A : Menu : Accounts Info. : NOT $$IsEmpty:$$SelectedCmps

Control : @@locAccountsInfo :$$Allow:Create:AccountsMasters OR $$Allow:Alter:AccountsMasters +

          OR $$Allow:Display:AccountsMasters

attribute Family at the report definition.

Data Types In R With Example

In this tutorial, you will learn:

What are the Data Types in R?

Following are the Data Types or Data Structures in R Programming:


Vectors (numerical, character, logical)


Data frames


Basic types

4.5 is a decimal value called a numeric.

4 is a natural value called an integer. Integers are also numerics.

TRUE or FALSE is a Boolean value called a logical.

The values inside ” ” or ‘ ‘ are text (strings). They are called characters.

We can check the type of a variable with the class() function.

Example 1:

# Declare variables of different types
# Numeric
x <- 28
class(x)


## [1] "numeric"

Example 2:

# String
y <- "R is Fantastic"
class(y)


## [1] "character"

Example 3:

# Boolean
z <- TRUE
class(z)


## [1] "logical"

Variables

Variables are one of the basic data types in R that store values and are an important component in R programming, especially for a data scientist. A variable in R can store a number, an object, a statistical result, a vector, a dataset, a model prediction: basically anything R outputs. We can use that variable later simply by calling its name.

To declare a variable in R, we need to assign it a name. The name should not contain spaces; we can use _ to connect two words.

To add a value to the variable in data types in R programming, use <- or =.

Here is the syntax:

# First way to declare a variable: use `<-`
name_of_variable <- value

# Second way to declare a variable: use `=`
name_of_variable = value

In the command line, we can write the following codes to see what happens:

Example 1:

# Print variable x
x <- 42
x


## [1] 42

Example 2:

y <- 10
y


## [1] 10

Example 3:

# We call x and y and apply a subtraction
x - y


## [1] 32

Vectors

A vector is a one-dimensional array. We can create a vector with any of the basic R data types we learnt before. The simplest way to build a vector in R is to use the c() command.

Example 1:

# Numerical
vec_num <- c(1, 10, 49)
vec_num


## [1] 1 10 49

Example 2:

# Character
vec_chr <- c("a", "b", "c")
vec_chr


## [1] "a" "b" "c"

Example 3:

# Boolean
vec_bool <- c(TRUE, FALSE, TRUE)
vec_bool

## [1]  TRUE FALSE  TRUE



We can do arithmetic calculations on vectors.

Example 4:

# Create the vectors
vect_1 <- c(1, 3, 5)
vect_2 <- c(2, 4, 6)
# Take the sum of vect_1 and vect_2
sum_vect <- vect_1 + vect_2
# Print out sum_vect
sum_vect


## [1] 3 7 11

Example 5:

In R, it is possible to slice a vector. On some occasions, we are interested in only the first five elements of a vector. We can use the [1:5] command to extract elements 1 to 5.

# Slice the first five rows of the vector
slice_vector <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
slice_vector[1:5]


## [1] 1 2 3 4 5

Example 6:

The shortest way to create a range of values is to use the : operator between two numbers. For instance, from the above example, we can write c(1:10) to create a vector of values from one to ten.

# Faster way to create adjacent values c(1:10)


## [1] 1 2 3 4 5 6 7 8 9 10

R Arithmetic Operators

We will first look at the basic arithmetic operators in R. Following are the arithmetic operators in R programming:

Operator Description

+ Addition

- Subtraction

* Multiplication

/ Division

^ or ** Exponentiation

Example 1:

# An addition
3 + 4


## [1] 7

You can easily copy and paste the above R code into the RStudio console. The output is displayed after the characters ##. For instance, if we write print('Guru99'), the output will be ## [1] "Guru99".

The ## marks printed output, and the number in the square brackets ([1]) is the index of the first element displayed on that line.

Example 2:

# A multiplication
3 * 5


## [1] 15

Example 3:

# A division
(5 + 5) / 2


## [1] 5

Example 4:

# Exponentiation
2^5


## [1] 32

Example 5:

# Modulo
28 %% 6


## [1] 4

R Logical Operators

With logical operators, we want to return values inside the vector based on logical conditions. Following is a detailed list of the logical operators in R programming.

Logical Operators in R

The logical statements in R are wrapped inside []. We can add as many conditional statements as we like, but we need to include them in parentheses. We can follow this structure to create a conditional statement:

variable_name[(conditional_statement)]

Example 1:

# Create a vector from 1 to 10
logical_vector <- c(1:10)



In the example below, we want to extract the values that meet the condition ‘strictly greater than five’. For that, we wrap the condition inside square brackets preceded by the vector containing the values.

# Print values strictly above 5
logical_vector[logical_vector > 5]

## [1] 6 7 8 9 10

Example 3:

# Print 5 and 6
logical_vector <- c(1:10)
logical_vector[(logical_vector > 4) & (logical_vector < 7)]


## [1] 5 6

Guide To Simple & Powerful Types Of C# Versions

Introduction to C# Versions

C# is an object-oriented language. It is very simple and powerful. The language was developed by Microsoft, and its first release came in 2002. Since then, the versions below have been released. In this article, we will discuss the different versions.


Versions of C#

1. C# Version 1.0

This version was similar to Java. It lacked async capabilities and some other functionality. The major features of this release are below:

Classes: It is a blueprint that is used to create the objects.

A C# source file may contain multiple classes, and the file name does not have to match any class name.

Comments can appear at the beginning or end of any line.

Types are organized into namespaces rather than packages. using directives bring namespaces into scope and, by convention, appear at the top of the file before the namespace and class declarations.

using directives apply to all classes within the source code file.


using System; public class Test { public int a, b; public void Display() { Console.WriteLine("Class in C#"); } }

Structure: In a struct, we can store different data types under a single variable. We can use user-defined data types in structs. We have to use the struct keyword to define one.


using System; namespace ConsoleApplication { public struct Emp { public string Name; public int Age; public int Empno; } class Geeks { static void Main(string[] args) { Emp P1; P1.Name = "Ram"; P1.Age = 21; P1.Empno = 80; Console.WriteLine("Data stored in P1: name is " + P1.Name + ", age is " + P1.Age + " and empno is " + P1.Empno); } } }


Interface: An interface is used as a contract for a class.

All interface members are implicitly public and abstract in this version.

An interface cannot contain fields.

Static methods were not allowed in interfaces in this version.

An interface can inherit from multiple interfaces.

A class can implement multiple interfaces.

A class implementing an interface must define all of the interface's members, or it should be declared abstract.

Literals: It is a value used by the variable. This is like a constant value.


using System; class Test { public static void Main(string[] args) { int a = 102; /* decimal literal */ int b = 0x65; /* hexadecimal literal */ int c = 0xFace; Console.WriteLine(a); Console.WriteLine(b); Console.WriteLine(c); } }

Delegates: A delegate is like a function pointer. It is a reference type that holds a reference to one or more methods.

2. C# Version 1.2

In this version, some enhancements were made. Notably, the code generated for a foreach loop was changed to call Dispose on the enumerator when the enumerator implements IDisposable.

3. C# Version 2.0

Generics: Generic programming is a style of computer programming in which algorithms are written in terms of types to-be-specified-later that are then instantiated when needed for specific types provided as parameters.
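In C#, List&lt;T&gt; and Dictionary&lt;K,V&gt; are the canonical generic types. As a hedged sketch of the same idea outside C#, here is a minimal generic stack written with Python's typing module (the Stack class is invented for illustration, not part of any library):

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")  # the "to-be-specified-later" type parameter

class Stack(Generic[T]):
    """A container written once, usable for any element type T."""
    def __init__(self) -> None:
        self._items: List[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

ints: Stack[int] = Stack()  # instantiated for a specific type
ints.push(1)
ints.push(2)
top = ints.pop()
```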

Anonymous Method: A method without a name, defined using the delegate keyword, typically assigned to a delegate or passed as an argument.

Nullable type: Before this release, value types could not be set to null. Nullable value types (for example, int?) overcome this.
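To illustrate the concept outside C# (where the syntax is int?), a rough Python analogue is Optional[int]: the value is either an int or None. The parse_age helper below is invented purely for illustration:

```python
from typing import Optional

def parse_age(text: str) -> Optional[int]:
    """Return the parsed integer, or None when the input is not a number."""
    return int(text) if text.isdigit() else None

age = parse_age("21")      # an int
missing = parse_age("n/a") # None, the "no value" case
```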


Covariance and contravariance

Getter/setter separate accessibility: We can give the getter and setter of a property different access levels (for example, a public getter with a private setter).

4. C# Version 3.0

This version made C# a formidable programming language.

Object and collection initializers: With the help of these, we can set accessible fields and properties at creation time without writing a constructor overload for every combination of values.

Partial Method: As the name suggests, its signature and implementation are defined separately, in separate parts of a partial type.

Var: we can declare an implicitly typed local variable using the keyword var; the compiler infers the type from the initializer.

5. C# Version 4.0

The version introduced some interesting features:

Dynamic Binding: Here the compiler does not decide which member to call; with the dynamic keyword, member resolution is deferred to run time. It is similar in spirit to method overriding, but resolved dynamically.

using System; public class Program { public class SuperClass { public virtual void Print() { Console.WriteLine("superclass."); } } public class SubClass : SuperClass { public override void Print() { Console.WriteLine("subclass."); } } public static void Main(string[] args) { SuperClass x = new SuperClass(); SuperClass y = new SubClass(); x.Print(); y.Print(); dynamic z = new SubClass(); z.Print(); } }

Named/Optional Arguments

Generic Covariant and Contravariant

Embedded Interop Types

The major feature here was the dynamic keyword. It bypasses compile-time type checking; members are resolved at run time.

6. C# Version 5.0

async and await

With these, long-running operations become much easier to write. The async modifier enables the await keyword inside a method. Execution runs synchronously until the first await, at which point control is yielded while the awaited operation completes.
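Python borrowed the same async/await shape, so the flow can be sketched outside C# as follows (fetch_report is a made-up stand-in for a long-running operation, not C# code):

```python
import asyncio

async def fetch_report() -> str:
    # Stands in for a slow I/O call (network, disk, database ...).
    await asyncio.sleep(0.01)
    return "report ready"

async def main() -> str:
    # Runs synchronously until the first await, then yields control
    # while the awaited operation completes.
    return await fetch_report()

outcome = asyncio.run(main())
```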

7. C# Version 6.0

This version included below functionalities

Static imports

Expression bodied members

Null propagator

Await in catch/finally blocks

Default values for getter-only properties

Exception filters

Auto-property initializers

String interpolation

nameof operator

Index initializers

8. C# Version 7.0

Out Variables: These are used when a method has to return multiple values. The out keyword is used on arguments passed by reference, and C# 7.0 also allows declaring an out variable inline at the call site.

Other important aspects are

Tuples and deconstruction.

Ref locals and returns.

Discards: These are write-only variables, named _. Basically, they are used to ignore values you do not need.

Binary Literals and Digit Separators.

Throw expressions

Pattern matching: We can use this on any data type.

Local functions: With the help of these, we can declare a function inside the body of another method.

Expanded expression-bodied members.

So every version has added new features to C# that help developers solve complex problems in an efficient manner. The next release will be C# 8.0.

Recommended Articles

This is a guide to C# Versions. Here we discuss the basic concept, various types of C# Versions along with examples and code implementation. You can also go through our other suggested articles –

Common Data Capturing Types And Tools


We saw the Data Science spectrum in the previous article, Common terminologies used in Machine Learning and Artificial Intelligence, but what do we need in order to enable each stage? That’s where tools and languages come into the picture.

But before that, we need to understand another aspect that comes prior to the spectrum: before your team starts exploring the data and building models, you should define and build a data engine. You need to ask questions like: Where is the data being generated? How big is the data? Which tools are required for collecting and storing it?

Note: If you are more interested in learning concepts in an Audio-Visual format, We have this entire article explained in the video below. If not, you may continue reading.

In this article, we’ll focus on the storage side of things. I want to point out here that you don’t need to memorize the tools that you’re going to see but should be aware of what’s out there to answer the questions we asked earlier. And here is the Data Science Spectrum-

The Three V’s of Big Data

We need to understand the characteristics of the data, and we can divide this into three V’s- Volume, Variety, and Velocity. We’ll understand each of these in a bit more detail and cover some of the commonly used tools for each type as well.

The Three V’s- Volume

Let’s look at the first V, Volume.

Volume refers to the scale and amount of data at hand.

Recall that 90% of the data we see in the world today was generated in the last few years. With decreasing storage and computational costs, collecting and storing huge amounts of data has become far easier. I'm sure all of you must have heard the term Big Data. Well, the volume of data defines whether it qualifies as "big data" or not. When we have relatively small amounts of data, say 1, 5, or 10 GB, we don't really need a big data tool to handle it. Traditional tools tend to work well on this amount of data.

When the data size increases significantly to 25 GB or 50 GB, this is the point when you should start considering big data tools.

But when the size of the data exceeds even this point, you most definitely do need to implement big data solutions. Traditional tools are not capable of handling 500 GB or 1 TB of data, no matter how much we might want them to.

So what are some other tools that can handle these different data sizes? Well, let’s look at them.

Tools for handling data of different sizes-

Excel is easily the most popular and recognizable tool in the industry for handling small datasets. But the maximum number of rows it supports per sheet is about 1 million (1,048,576), and one sheet can only handle up to 16,384 columns at a time. This is simply not enough when the amount of data is big.

Access is another Microsoft tool, popularly used for data storage. Again, smaller databases of up to 2 GB can be stored, but beyond that it is simply not possible with Microsoft Access.

SQL is the standard language of relational database management systems and has been around since the 1970s. It was the primary database solution for quite a few decades. It's still popular, but other solutions have emerged. Its main drawback is that relational databases can be very difficult to scale as your data continues to grow.

I’m sure you must have heard of Hadoop. It’s an open-source distributed processing framework that manages data processing and storage for big data. You will more than likely come across Hadoop anytime you build a machine learning project from scratch.

Apache Hive is a data warehouse built on top of Hadoop. Hive provides a SQL-like interface to query data stored in various databases and file systems that integrate with Hadoop.

The Three V’s- Variety

The second V we have is Variety, which refers to the different types of data. This can include structured and unstructured data. Under the structured data umbrella, we can classify things like tabular data: employee tables, payout tables, loan application tables, and so on.

As you might’ve gathered, there’s a certain structure to these data types. But when we swing over to unstructured data, we see formats like emails, social media, which includes your Facebook posts, tweets, etc, customer feedback, video feeds, satellite image feeds among other things.

The data stored in these formats do not follow a trend or pattern. It’s huge and diverse and can be quite challenging to deal with.

So what tools are available in the market for handling and storing these different data types? The two most common database families out there are SQL and NoSQL (Not Only SQL).

SQL was the market-dominant player for a number of years before NoSQL emerged. Some examples of SQL databases include MySQL and Oracle SQL, whereas NoSQL includes popular databases like MongoDB, Cassandra, etc. These NoSQL databases are seeing huge adoption because of their ability to scale and handle dynamic data, something that SQL struggles with.
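To make the SQL side concrete, here is a minimal sketch using Python's built-in sqlite3 module; the employees table and its columns are invented for illustration:

```python
import sqlite3

# Structured, schema-first storage: the classic SQL strength.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ana", "IT", 70000.0), ("Raj", "HR", 55000.0), ("Mei", "IT", 82000.0)],
)

# Declarative querying over structured rows.
rows = conn.execute(
    "SELECT name FROM employees WHERE dept = ? ORDER BY salary DESC",
    ("IT",),
).fetchall()
it_staff = [name for (name,) in rows]
conn.close()
```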

The Three V’s- Velocity

The third and final V is Velocity. This is the speed at which data is captured. This includes both real-time and non-real-time capture. But in this article, we’ll focus more on real-time data. This includes Sensor data, which is captured by self-driving cars and CCTV cameras among other things. Self-driving cars need to process data really quickly when they’re on the road. And CCTV cameras of course are popularly used for security purposes and need to capture data points all day long.

Stock readings are another example of real-time data. Did you know that more than 1 TB of trade information is generated during each trade session at the New York Stock Exchange? That's the scale of real-time data we're talking about here: 1 TB during each trade session.

Of course, Detecting fraud and Credit card transactions also fall into real-time data processing. And Social media posts and tweets are prime examples for explaining what real-time data looks like. In fact, it takes less than two days for 1 billion tweets to be sent. This is exactly where data storage has become so important in today’s world.

Now let's look at some of the common tools that capture real-time data for processing. One widely used streaming platform has these strengths:

it’s fault-tolerant 

really quick

and it’s used in production by a lot of organizations

Another one is Apache Storm. It can be used with almost any programming language. Storm can process over 1 million tuples per second per node and is highly scalable. It's a good option to consider for high data velocity.

So that was all about the types of data in a few widely used tools associated with them.

End Notes

In this article, we saw some common data capturing types and the tools associated with them. We learned about the three V's of Big Data, and about the various tools required for handling data of different sizes, of different types such as structured or unstructured, and in real time.

If you are looking to kick start your Data Science Journey and want every topic under one roof, your search stops here. Check out Analytics Vidhya’s Certified AI & ML BlackBelt Plus Program


What Is Data Analysis? Research, Types & Example

What is Data Analysis?

Data analysis is defined as a process of cleaning, transforming, and modeling data to discover useful information for business decision-making. The purpose of data analysis is to extract useful information from data and to make decisions based upon that analysis.

A simple example of data analysis: whenever we make a decision in our day-to-day life, we think about what happened last time or what will happen if we choose that particular option. This is nothing but analyzing our past or future and making decisions based on it. For that, we gather memories of our past or dreams of our future. When an analyst does the same thing for business purposes, it is called data analysis.


Why Data Analysis?

To grow your business, or even to grow in your life, sometimes all you need to do is analysis!

If your business is not growing, then you have to look back, acknowledge your mistakes, and make a plan again without repeating those mistakes. And even if your business is growing, then you have to look forward to making the business grow more. All you need to do is analyze your business data and business processes.

Data Analysis Tools

Data analysis tools make it easier for users to process and manipulate data, analyze the relationships and correlations between data sets, and identify patterns and trends for interpretation. Here is a complete list of tools used for data analysis in research.

Types of Data Analysis: Techniques and Methods

There are several types of Data Analysis techniques that exist based on business and technology. However, the major Data Analysis methods are:

Text Analysis

Statistical Analysis

Diagnostic Analysis

Predictive Analysis

Prescriptive Analysis

Text Analysis


Text Analysis is also referred to as Data Mining. It is one of the methods of data analysis used to discover patterns in large data sets using databases or data mining tools. It is used to transform raw data into business information. Business Intelligence tools available in the market are used to make strategic business decisions. Overall, it offers a way to extract and examine data, derive patterns, and finally interpret the data.

Statistical Analysis

Statistical Analysis shows "What happened?" by using past data in the form of dashboards. Statistical analysis includes the collection, analysis, interpretation, presentation, and modeling of data. It analyses a set of data or a sample of data. There are two categories of this type of analysis: Descriptive Analysis and Inferential Analysis.

Descriptive Analysis

Descriptive Analysis analyses complete data or a sample of summarized numerical data. It shows the mean and deviation for continuous data, and percentage and frequency for categorical data.
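As a small illustration of those two summaries, here is a sketch using Python's standard statistics module (the numbers are toy data, invented for the example):

```python
import statistics

# Continuous data: summarize with mean and deviation.
scores = [72, 85, 90, 66, 85]
mean_score = statistics.mean(scores)
stdev_score = statistics.stdev(scores)

# Categorical data: summarize with frequency and percentage.
grades = ["pass", "pass", "fail", "pass"]
pass_frequency = grades.count("pass")
pass_percentage = 100 * pass_frequency / len(grades)
```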

Inferential Analysis

Inferential Analysis analyses a sample of the complete data. You can reach different conclusions from the same data by selecting different samples.

Diagnostic Analysis

Diagnostic Analysis shows "Why did it happen?" by finding the cause from the insight found in statistical analysis. This analysis is useful for identifying behavior patterns in data. If a new problem arises in your business process, you can look into this analysis to find similar patterns for that problem, and there is a chance that similar prescriptions can be applied to the new problem.

Predictive Analysis

Predictive Analysis shows "What is likely to happen?" by using previous data. The simplest example: if last year I bought two dresses based on my savings, and this year my salary doubles, then I can buy four dresses. But of course it's not that easy, because you have to think about other circumstances, like the chance that clothes prices increase this year, or that instead of dresses you want to buy a new bike, or you need to buy a house!

So here, this analysis makes predictions about future outcomes based on current or past data. Forecasting is just an estimate; its accuracy depends on how much detailed information you have and how deeply you dig into it.
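The dresses example can be written as a tiny back-of-the-envelope extrapolation; all numbers below are invented, purely illustrative:

```python
# Last year: savings bought 2 dresses.
last_year_savings = 200.0
dresses_bought = 2
price_per_dress = last_year_savings / dresses_bought

# This year: salary (and savings) doubled, prices assumed unchanged --
# exactly the assumption the text warns may not hold in reality.
this_year_savings = 2 * last_year_savings
forecast_dresses = this_year_savings / price_per_dress
```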

Prescriptive Analysis

Prescriptive Analysis combines the insight from all previous Analysis to determine which action to take in a current problem or decision. Most data-driven companies are utilizing Prescriptive Analysis because predictive and descriptive Analysis are not enough to improve data performance. Based on current situations and problems, they analyze the data and make decisions.

Data Analysis Process

The data analysis process is nothing but gathering information by using a proper application or tool which allows you to explore the data and find patterns in it. Based on that information and data, you can make decisions or reach final conclusions.

Data Analysis consists of the following phases:

Data Requirement Gathering

Data Collection

Data Cleaning

Data Analysis

Data Interpretation

Data Visualization

Data Requirement Gathering

First of all, you have to think about why you want to do this data analysis. You need to find out the purpose or aim of analyzing the data, and decide which type of data analysis you want to do. In this phase, you have to decide what to analyze and how to measure it; you have to understand why you are investigating and what measures you will use to do this analysis.

Data Collection

After requirement gathering, you will have a clear idea about what things you have to measure and what your findings should be. Now it's time to collect your data based on the requirements. Once you collect your data, remember that the collected data must be processed or organized for analysis. As you collect data from various sources, you should keep a log with the collection date and source of the data.

Data Cleaning

Whatever data is collected may not be useful or may be irrelevant to the aim of your analysis, so it should be cleaned. The collected data may contain duplicate records, white space, or errors. The data should be cleaned and made error-free. This phase must be done before analysis, because based on data cleaning, your analysis output will be closer to your expected outcome.
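A minimal sketch of such a cleaning pass, using only the Python standard library (the raw records are toy data, invented for illustration):

```python
# Raw records with stray white space, duplicates, and empty rows.
raw = [" Alice ", "Bob", "Alice", "", "   ", "Bob "]

cleaned = []
seen = set()
for record in raw:
    value = record.strip()          # trim white space
    if not value or value in seen:  # drop empty rows and duplicates
        continue
    seen.add(value)
    cleaned.append(value)
```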

Data Analysis

Once the data is collected, cleaned, and processed, it is ready for Analysis. As you manipulate data, you may find you have the exact information you need, or you might need to collect more data. During this phase, you can use data analysis tools and software which will help you to understand, interpret, and derive conclusions based on the requirements.

Data Interpretation

After analyzing your data, it is time to interpret your results. You can choose how to express or communicate them, such as in simple words, a table, or a chart, and then use the results to decide your best course of action.

Data Visualization

Data visualization is very common in day-to-day life; visualizations often appear in the form of charts and graphs. In other words, data is shown graphically so that it is easier for the human brain to understand and process. Data visualization is often used to discover unknown facts and trends. By observing relationships and comparing datasets, you can uncover meaningful information.


Data analysis means a process of cleaning, transforming and modeling data to discover useful information for business decision-making

Types of Data Analysis are Text, Statistical, Diagnostic, Predictive, Prescriptive Analysis

Data Analysis consists of Data Requirement Gathering, Data Collection, Data Cleaning, Data Analysis, Data Interpretation, Data Visualization
