halderiitp•ac•in
R-406, 4th Floor, Block 3
+91 612 3028009
R-502, 5th Floor, Block 3
+91 612 3028889
FLsim is a versatile federated learning (FL) simulation framework designed to accommodate various FL workflows. It focuses on modularity, scalability, resource efficiency, and reproducibility of experimental outcomes. Its user-friendly interface enables tailored FL setups, including customized data distributions, local learning algorithms, network topologies, model aggregation methods, and optional blockchain integration. Experimental evaluations confirm FLsim's efficacy in simulating diverse FL scenarios. FLsim thus advances the state of FL simulation tools, providing researchers and practitioners with a high degree of flexibility and functionality.
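Among the customizable components mentioned above are model aggregation methods. As a minimal illustration of what such a pluggable aggregator does (the function name and data layout below are our own sketch, not FLsim's actual API), federated averaging weights each client's parameters by its local sample count:

```python
# Sketch of federated averaging (FedAvg), the canonical FL aggregation
# rule. Names and data layout are illustrative, not FLsim's API.

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters.

    client_weights: list of dicts mapping layer name -> list of floats
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    agg = {}
    for layer in client_weights[0]:
        agg[layer] = [
            sum(w[layer][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0][layer]))
        ]
    return agg

# Two clients with unequal data sizes: the larger client dominates.
clients = [{"fc": [1.0, 2.0]}, {"fc": [3.0, 4.0]}]
global_model = fed_avg(clients, client_sizes=[1, 3])
# fc = [(1*1 + 3*3)/4, (2*1 + 4*3)/4] = [2.5, 3.5]
```

A framework like FLsim would let users swap this rule for alternatives (e.g., median- or trimmed-mean-based aggregation) without changing the rest of the workflow.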
SoliFMT is a formal methods toolchain for the Solidity programming language. It hosts a number of visualization, analysis, and optimization tools for Solidity smart contracts.
SemDDA is a database dependency analyzer written in Java. It computes both syntax- and semantics-based database-database dependencies in Java Server Pages (JSP)-based database-driven web applications. The tool supports various levels of abstraction in the Abstract Interpretation framework, allowing users to find a trade-off between the precision and the efficiency of the dependency analysis.
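To give an intuition for the syntactic side of such an analysis (this toy sketch is ours, not SemDDA's algorithm or input format), a later statement that reads a table depends on an earlier statement that writes it; a semantics-based refinement would additionally use abstract information about WHERE clauses to discard spurious dependencies:

```python
import re

# Toy syntactic database-database dependency check: a statement that
# reads a table depends on an earlier statement that writes it.
# Purely illustrative; SemDDA's semantic analysis is far more precise.

def defined_table(stmt):
    """Table written by an UPDATE/INSERT/DELETE statement, if any."""
    m = re.match(r"\s*(UPDATE|INSERT\s+INTO|DELETE\s+FROM)\s+(\w+)", stmt, re.I)
    return m.group(2).lower() if m else None

def used_table(stmt):
    """Table read via a FROM clause, if any."""
    m = re.search(r"\bFROM\s+(\w+)", stmt, re.I)
    return m.group(1).lower() if m else None

def db_dependent(writer, reader):
    t = defined_table(writer)
    return t is not None and t == used_table(reader)

print(db_dependent("UPDATE emp SET sal = sal * 1.1", "SELECT sal FROM emp"))  # True
print(db_dependent("UPDATE dept SET loc = 'X'", "SELECT sal FROM emp"))       # False
```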
K-Taint is a rewriting-logic-based executable semantics in the K framework for taint analysis of an imperative programming language. K-Taint extends the semantically sound flow-sensitive security type system introduced by Hunt and Sands to the case of taint analysis, adding support for interprocedural analysis as well. The tool effectively deals with pointer aliasing and a number of constant functions, improving the precision of the analysis results.
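To illustrate what flow-sensitivity buys in a taint setting (a minimal sketch over an invented three-address form, not K-Taint's K-semantics or input syntax), the taint of a variable can change along the control flow, so a reassignment from an untainted source untaints it:

```python
# Minimal flow-sensitive taint propagation over a toy assignment
# language, in the spirit of a Hunt-Sands-style analysis. The program
# representation here is invented for illustration.

TAINTED, UNTAINTED = "tainted", "untainted"

def analyse(program, initial):
    """program: list of (dst, srcs) assignments. The destination
    becomes tainted iff some source is tainted at that point."""
    env = dict(initial)
    for dst, srcs in program:
        env[dst] = TAINTED if any(env.get(s) == TAINTED for s in srcs) else UNTAINTED
    return env

prog = [
    ("y", ["x"]),       # y := x      -> y tainted (x is user input)
    ("z", ["y", "c"]),  # z := y + c  -> z tainted
    ("y", ["c"]),       # y := c      -> y untainted again (flow-sensitive)
]
env = analyse(prog, {"x": TAINTED, "c": UNTAINTED})
# Final env: x tainted, c untainted, z tainted, y untainted
```

A flow-insensitive analysis would instead report y as tainted everywhere, losing the precision that the final assignment restores.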
TUKRA allows the practical evaluation of abstract program slicing algorithms. It exploits the notions of statement relevancy, semantic data dependences, and conditional dependences. The combination of these three notions allows TUKRA to refine traditional syntax-based program dependence graphs, generating more accurate slices. Given a program and an abstract program slicing criterion from the end-user as input, TUKRA is able to perform both syntax- and semantics-based intraprocedural slicing of the program w.r.t. the slicing criterion.
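As background for the syntactic baseline that abstract slicing refines (the statement labels and dependence map below are our own toy example, not TUKRA's representation), a backward slice collects every statement reachable backwards along dependence edges from the criterion:

```python
# Backward slicing over a syntactic program dependence graph (PDG):
# the slice w.r.t. a criterion statement is everything reachable
# backwards along data/control dependence edges. Abstract slicing
# would further prune edges irrelevant under the abstract criterion;
# this sketch shows only the syntactic baseline.

def backward_slice(deps, criterion):
    """deps: stmt -> set of stmts it (data- or control-) depends on."""
    slice_, work = set(), [criterion]
    while work:
        s = work.pop()
        if s not in slice_:
            slice_.add(s)
            work.extend(deps.get(s, ()))
    return slice_

# 1: a := input()   2: b := a + 1   3: c := 0   4: print(b)
deps = {2: {1}, 4: {2}}
print(sorted(backward_slice(deps, 4)))  # [1, 2, 4] -- statement 3 is irrelevant
```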
In our data collection process, we draw on all relevant data sources and APIs to collect and compile our dataset. We acquire parcel metadata using the Decentraland API, forming the characteristics data fragment. Additionally, we integrate trading history data from OpenSea, providing a temporal perspective on sold parcels. To extract essential transaction details such as costs, gas prices, and transaction activities, we utilize Google BigQuery and Etherscan to gather Ethereum transaction data. Fig 2 provides an overview of the data collection process, while Fig 3 illustrates the details of, and relationships between, the data fragments.
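The three fragments described above ultimately have to be linked into one dataset. A hedged sketch of that linkage (all column names here, such as parcel_id and tx_hash, are hypothetical; the actual schemas of the Decentraland, OpenSea, and Ethereum fragments may differ) could look like:

```python
import pandas as pd

# Illustrative join of the three data fragments: parcel characteristics
# <- trading history (by parcel) <- on-chain transaction details (by tx).
# Column names are hypothetical, not the dataset's actual schema.

characteristics = pd.DataFrame({"parcel_id": [1, 2], "x": [10, -5], "y": [3, 7]})
trades = pd.DataFrame({"parcel_id": [1], "tx_hash": ["0xabc"], "price_eth": [2.5]})
eth_txs = pd.DataFrame({"tx_hash": ["0xabc"], "gas_price_gwei": [40]})

dataset = (characteristics
           .merge(trades, on="parcel_id", how="left")   # keep unsold parcels too
           .merge(eth_txs, on="tx_hash", how="left"))   # attach on-chain details
print(dataset)
```

Left joins preserve parcels with no recorded sale, which keeps the characteristics fragment complete while transaction fields stay empty for unsold parcels.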
Weapon detection is a pressing need today. It plays a crucial role in many applications, such as hostage scenes, surveillance of sensitive areas, anti-terrorist operations, etc. To make weapon detection models more effective, we introduce a weapon dataset, named IITP-W, that captures the following properties: a) images depicting real-world scenarios, including complex backgrounds, diverse lighting conditions, object occlusions, and varying image resolutions; b) images containing both large and small weapons; c) absence of images sharing identical information; and d) exclusion of synthetic images. The dataset includes three types of weapons: (1) short guns, (2) long guns, and (3) knives. The IITP-W dataset consists of 4292 instances of short guns, 1047 instances of knives, and 5447 instances of long guns, with complex backgrounds, varied sizes, different lighting conditions, and different resolutions. The short gun category includes images of real guns of 30 different types in different firing statuses; similarly, the long gun category includes images of real guns of 61 different types. Figure 2 depicts images from existing and proposed datasets, highlighting the differences. Additionally, Table 1 furnishes details such as data size and the number of images with plain backgrounds and of synthetic images for both the existing and the proposed datasets.
Group activity recognition (GAR) in a video is a problem of critical importance, given its broad applications in video analysis, surveillance systems, and the analysis of social behaviour. However, existing GAR datasets do not include crime scenes. While the UCF-Crime dataset is commonly used for crime and anomaly detection in videos, it has several limitations: a limited number of videos per class, very low-quality footage owing to its age, the inclusion of non-human elements in crime scenes, and a lack of labelling suitable for direct use in existing GAR models. The proposed IITP Hostage dataset is designed to detect hostage scenes based on the group activities of hostages and hostage-takers. The dataset includes two categories, hostage and non-hostage, with 923 videos in total. It features 137 actors, a significant increase compared to existing datasets, which typically include only 20-30 actors; this expanded diversity enhances the dataset's ability to generalize across various real-world scenarios. IITP Hostage was created by staging mock hostage attacks in various scenarios and by extracting clips from movie scenes. In contrast, the non-hostage category encompasses everyday group activities such as walking, talking, and sitting, which makes the non-hostage videos more challenging to distinguish from hostage scenes. A sample video is shown on the left-hand side.