
Crack Your Caspex Interview in 2024: 8 Detailed Questions & Explanations for Senior Java Developer

Before we delve into the detailed questions and explanations for the second round of the Caspex interview, it’s important to understand what the first round entails. If you haven’t already, I highly recommend checking out our comprehensive guide on the

First Round of the Caspex interview:

https://codetechsummit.com/interview-at-caspex-bangalore/

This will give you a solid foundation and prepare you for the subsequent rounds.

Now, let’s talk a bit about Caspex. Caspex is a global consulting and IT services company [1]. They are known for providing end-to-end IT services that align with the changing needs of their worldwide partners across various industries [2]. Caspex is committed to delivering innovative and flexible solutions with lasting value.

They have a customer-centric approach that enables them to form strong partnerships, ensuring partners grow while nurturing relationships over time [3]. With over 15 years of service, Caspex has successfully managed numerous service projects and has a wide network of consultants worldwide [3]. Their expertise spans various technologies, including Java, Microservices, Microsoft .NET, React Native, Flutter, Microsoft Azure/AWS/GCP microservices, React JS, and Big Data Platform Engineering [3].

With this understanding of Caspex and its operations, you can better tailor your responses during the interview to align with the company’s values and objectives. Now, let’s move on to the second round of the Caspex interview…

Question: Can you explain the concept of Spring Security and how we can implement it in our Spring Boot applications?

Spring Security is a powerful and highly customizable authentication and access-control framework for Java applications. It is particularly useful for securing web applications, RESTful web services, and other types of applications built with the Spring framework.

Here’s an overview of the Spring Security framework and how you can implement it in your Spring Boot applications:

1. Core Concepts:

  • Authentication: The process of verifying the identity of a user or a system. Spring Security provides various authentication mechanisms, including form-based authentication, Basic authentication, and OAuth2/OpenID Connect.
  • Authorization: The process of determining what a user or system is allowed to do. Spring Security supports role-based access control (RBAC), URL-based security, and method-level security.
  • Principal: An object representing the currently authenticated user or system.
  • Filters: Spring Security uses a chain of filters to handle security concerns such as authentication and authorization.

2. Implementing Spring Security in a Spring Boot Application:

Step 1: Add the Spring Security dependency
In your pom.xml file, add the following dependency:

XML
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>

Step 2: Configure Authentication
You can configure authentication in multiple ways, such as in-memory authentication, JDBC-based authentication, or LDAP-based authentication.

For example, to configure in-memory authentication, you can create a class that extends WebSecurityConfigurerAdapter and override the configure(AuthenticationManagerBuilder auth) method:

Java
@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .withUser("user")
            .password("{noop}password")
            .roles("USER");
    }
}

Step 3: Configure Authorization
You can configure authorization rules by overriding the configure(HttpSecurity http) method in your SecurityConfig class:

Java
@Override
protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
        .antMatchers("/admin/**").hasRole("ADMIN")
        .antMatchers("/user/**").hasAnyRole("USER", "ADMIN")
        .antMatchers("/**").permitAll()
        .and().formLogin();
}

This example configuration allows access to the /admin/** URLs for users with the ADMIN role, /user/** URLs for users with USER or ADMIN roles, and permits all other URLs. It also configures form-based authentication.
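
Note that WebSecurityConfigurerAdapter was deprecated in Spring Security 5.7 and removed in Spring Security 6 (Spring Boot 3), where the same rules are expressed as a SecurityFilterChain bean. A minimal sketch of the equivalent configuration, assuming Spring Boot 3, might look like this:

Java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    // Component-based replacement for the configure(HttpSecurity) override above
    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/admin/**").hasRole("ADMIN")
                .requestMatchers("/user/**").hasAnyRole("USER", "ADMIN")
                .anyRequest().permitAll())
            .formLogin(Customizer.withDefaults()); // form-based login with default settings
        return http.build();
    }
}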

Step 4: Customize Spring Security (Optional)
Spring Security provides numerous customization options, such as custom authentication providers, custom UserDetailsService implementations, method-level security using @PreAuthorize and @PostAuthorize annotations, and more.

For example, to use a custom UserDetailsService implementation, you can create a class that implements UserDetailsService and override the loadUserByUsername method:

Java
@Service
public class CustomUserDetailsService implements UserDetailsService {

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        // Load user details from a database or other source
        // ...
        return new User(username, password, authorities);
    }
}

Then, you can configure your SecurityConfig class to use this custom UserDetailsService:

Java
@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private CustomUserDetailsService customUserDetailsService;

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.userDetailsService(customUserDetailsService);
    }

    // Other configuration methods...
}

Spring Security offers numerous other features and customization options, such as CSRF protection, session management, method-level security, and more.
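
For example, method-level security can be switched on with @EnableMethodSecurity (or @EnableGlobalMethodSecurity(prePostEnabled = true) on older Spring Security versions) and then applied directly to service methods. A small illustrative sketch, where the ReportService class and its method are hypothetical:

Java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.stereotype.Service;

@Configuration
@EnableMethodSecurity
class MethodSecurityConfig {
    // Enables processing of @PreAuthorize/@PostAuthorize annotations
}

@Service
class ReportService {

    // Only callers holding the ADMIN role may invoke this method;
    // any other caller gets an AccessDeniedException.
    @PreAuthorize("hasRole('ADMIN')")
    public String generateAdminReport() {
        return "sensitive report data";
    }
}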

Please also read: https://medium.com/@minadev/authentication-and-authorization-with-spring-security-bf22e985f2cb

Question: Can I use both application.properties and application.yaml files simultaneously in a single Spring Boot application?

Yes, you can use both application.properties and application.yaml files simultaneously in a single Spring Boot application. Spring Boot supports both file formats for specifying configuration properties.

When you have both files present in your application, Spring Boot will load the properties from both, but the order of precedence matters. On Spring Boot 2.4 and later, when the same key appears in both files in the same location, the value from application.properties takes precedence over the one from application.yaml (if you are on an older version, verify the behavior, as the config file processing rules changed in Spring Boot 2.4).

Here’s the order of precedence that Spring Boot follows when loading configuration properties:

  1. Command-line arguments
  2. Java System properties
  3. Operating System environment variables
  4. RandomValuePropertySource (random.* properties)
  5. application.properties (in the configured search locations)
  6. application.yml or application.yaml (in the configured search locations)
  7. @Configuration classes and @PropertySource annotations

So, if you have the same property defined in both application.properties and application.yaml in the same location, the value from application.properties takes precedence (on Spring Boot 2.4 and later).

It’s generally recommended to use either application.properties or application.yaml consistently throughout your application, as mixing the two formats can make the configuration harder to maintain and understand. However, if you have a specific requirement or preference, you can use both files simultaneously.

Keep in mind that when using both files, it’s important to ensure that there are no conflicting properties or unintended overrides. Additionally, you may want to consider separating your configuration into multiple files (e.g., application.properties for common properties and application-env.properties for environment-specific properties) for better organization and maintainability.
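
As a small illustration (assuming Spring Boot 2.4 or later and the default search locations), if both files define the same key, the value from application.properties is the one that takes effect:

application.properties
# Takes precedence over application.yaml in the same location (Spring Boot 2.4+)
server.port=8080

application.yaml
server:
  port: 9090   # overridden by the value in application.properties

With both files on the classpath, the application starts on port 8080.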

Question: As you are aware, implementing caching mechanisms on both the server and client side is essential for enhancing performance. Could you please discuss how you would integrate caching solutions such as Spring Cache or Redis on the backend?

Caching solutions like Spring Cache and Redis can be integrated on the server side (backend), complemented by caching on the client side, to enhance performance. Let’s start with server-side caching (a short code sketch follows these steps):

  1. Spring Cache:
    Spring Cache is a caching abstraction layer that provides a consistent caching solution for various caching providers. It can be easily integrated into a Spring application, and it supports different caching providers like Redis, Ehcache, Guava, and more.

To integrate Spring Cache in your backend application, you need to follow these steps:

  • Add the required dependencies (e.g., spring-boot-starter-cache) to your project.
  • Configure the caching provider (e.g., Redis) in your application properties or configuration files.
  • Annotate the methods you want to cache with @Cacheable, @CachePut, or @CacheEvict.
  • Optionally, configure cache settings like time-to-live (TTL), cache eviction policies, and cache names.
  2. Redis:
    Redis is an open-source, in-memory data structure store that can be used as a distributed cache. It provides high performance and scalability, making it a popular choice for caching in backend applications.

To integrate Redis directly in your backend application, you can follow these steps:

  • Add the required dependencies (e.g., spring-boot-starter-data-redis) to your project.
  • Configure Redis connection details (host, port, password, etc.) in your application properties or configuration files.
  • Inject the RedisTemplate or StringRedisTemplate in your classes and use it to interact with Redis (get, set, delete, etc.).
  • Implement caching logic using Redis data structures like String, Hash, List, Set, etc.
  • Optionally, configure Redis settings like expiration policies, memory management, and clustering.
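
As mentioned above, here is a minimal sketch that ties the two together: caching is enabled, the Redis-backed cache manager is left to Spring Boot’s auto-configuration (driven by spring.data.redis.* properties, or spring.redis.* on Spring Boot 2.x), and a service method is annotated with @Cacheable. The ProductService class and cache name are hypothetical:

Java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
class CacheConfig {
    // With spring-boot-starter-data-redis on the classpath and connection
    // details configured, Spring Boot auto-configures a Redis-backed
    // CacheManager that backs the @Cacheable annotation below.
}

@Service
class ProductService {

    // The first call for a given id executes the method body; the result is
    // stored in the "products" cache in Redis and returned directly on
    // subsequent calls with the same id.
    @Cacheable(value = "products", key = "#id")
    public String findProductName(Long id) {
        simulateSlowLookup();
        return "product-" + id;
    }

    private void simulateSlowLookup() {
        try {
            Thread.sleep(2000); // stand-in for a database or remote call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}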

Now, let’s discuss client-side caching:

  1. Browser Caching:
    Modern web browsers have built-in caching mechanisms that can be leveraged for client-side caching. You can configure caching headers like Cache-Control, Expires, and ETag on the server-side to instruct the browser when and how to cache static resources like CSS, JavaScript, images, and other assets.
  2. Service Worker Caching:
    For more advanced client-side caching scenarios, like caching API responses or dynamic content, you can use Service Workers. Service Workers are scripts that run in the background of your web application and can intercept network requests, cache responses, and serve cached responses when the user is offline or on a slow network.

To implement Service Worker caching, you need to follow these steps:

  • Register a Service Worker in your web application.
  • Implement caching strategies in the Service Worker (e.g., cache-first, network-first, stale-while-revalidate).
  • Cache API responses or other dynamic content using the Cache API or the caches object.
  • Optionally, implement cache versioning, cache expiration, and cache invalidation strategies.

By implementing caching solutions on both the server-side and client-side, you can significantly improve the performance of your application by reducing server load, decreasing network latency, and providing a better user experience, especially for users on slow or unreliable networks.
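
Coming back to the browser-caching headers mentioned in point 1 of the client-side section, on a Spring MVC backend they can be set for static resources with a configuration like the following (a rough sketch; the paths and the seven-day max-age are illustrative choices):

Java
import java.time.Duration;

import org.springframework.context.annotation.Configuration;
import org.springframework.http.CacheControl;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebCacheConfig implements WebMvcConfigurer {

    // Serves files under /static/** with a "Cache-Control: max-age=604800, public"
    // header so browsers can reuse them for seven days without re-downloading.
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/static/**")
                .addResourceLocations("classpath:/static/")
                .setCacheControl(CacheControl.maxAge(Duration.ofDays(7)).cachePublic());
    }
}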

Question: Can you tell me the role of Actuator in Spring Boot?

Spring Boot Actuator is a sub-project of the Spring Boot framework. Its role is to expose operational information about a running Spring Boot application: health, metrics, build info, thread dumps, environment properties, and so on. This helps you monitor and manage your application in production.

Some of the key roles and features of Spring Boot Actuator include:

  1. Health Checks: Actuator provides built-in health checks for monitoring the status of your application and its dependencies (database, disk space, etc).
  2. Metrics: It provides metrics data about your application’s running state (memory, CPU, HTTP requests, etc) which can be exported to monitoring systems.
  3. HTTP Endpoints: Actuator exposes a set of built-in HTTP endpoints (like /health, /metrics, /env) which return operational information in JSON or other formats.
  4. Auditing: It supports HTTP tracing and mapping capabilities for auditing incoming requests.
  5. Environment Properties: The /env endpoint exposes properties from Spring’s ConfigurableEnvironment.
  6. Customization: You can create custom health checks and metrics to expose domain-specific application information.

So in essence, Spring Boot Actuator plays the role of adding several production-grade monitoring and management capabilities to your Spring Boot application out-of-the-box via HTTP endpoints and JMX beans. This helps you implement effective monitoring, alerting and fault detection for your application.
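
As an example of the customization mentioned in point 6, a custom health check is simply a bean implementing HealthIndicator. A hedged sketch (the payment-gateway check is hypothetical):

Java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class PaymentGatewayHealthIndicator implements HealthIndicator {

    // Contributes a "paymentGateway" entry to the /actuator/health response.
    @Override
    public Health health() {
        boolean reachable = pingGateway();
        return reachable
                ? Health.up().withDetail("gateway", "reachable").build()
                : Health.down().withDetail("gateway", "unreachable").build();
    }

    private boolean pingGateway() {
        // Placeholder: a real implementation would call the external system here.
        return true;
    }
}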

Question: Do you have hands-on experience setting up CI pipelines using Jenkins?

I have experience setting up CI/CD pipelines using Jenkins for various projects. The typical steps I follow are:

  1. Install and Configure Jenkins: I start by installing Jenkins on a server or containerizing it using Docker. I then configure global security, set up credentials, and install required plugins like Git, Maven, and Docker.
  2. Create Jobs: I create separate Jenkins jobs for different stages of the pipeline – e.g. one for build, one for unit tests, another for integration tests etc.
  3. Integrate with Source Control: I integrate the Jenkins jobs with the source code repository (e.g. GitHub) so that Jenkins can checkout the latest code changes automatically.
  4. Configure Build Steps: For build jobs, I configure steps to compile/package the application using the appropriate build tool like Maven or Gradle.
  5. Run Tests: For test jobs, I add steps to run the unit/integration test suites and configure reporting plugins to publish test reports.
  6. Static Code Analysis: I integrate static code analysis tools like SonarQube or CheckStyle in the pipeline.
  7. Deploy Artifacts: If tests pass, I configure deploy jobs to push artifacts to an artifact repository like Nexus or Artifactory.
  8. Containerize: For containerized apps, I build and push the Docker image in the pipeline.
  9. Environment Deployments: I setup deploy jobs for each environment (dev, staging, prod) to deploy the built artifacts/images.
  10. Notifications: I configure email/chat notifications on build failures or deployments.

Additionally, I’ve used advanced Jenkins capabilities like:

  • Parameterized Builds for customizing pipeline runs
  • Multibranch Pipelines for automated CI/CD across branches
  • Shared Libraries for reusable pipeline code
  • Integration with cloud providers like AWS and GCP for dynamically provisioned build agents

I’m also experienced with managing and scaling Jenkins infrastructure using a controller/agent (formerly master/agent) architecture.

So in summary, I have extensive hands-on experience setting up robust, automated CI/CD pipelines on Jenkins covering all stages from code checkout to environment deployments. I’m very comfortable with configuring Jenkins per project requirements.
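
To make the steps above concrete, a typical pipeline ends up expressed as a declarative Jenkinsfile along these lines. This is only a minimal sketch: the stage contents, image name, and the availability of the Maven, Docker, JUnit, and Mailer plugins are assumptions:

Groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile and package the application
                sh 'mvn -B clean package -DskipTests'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'
            }
            post {
                always {
                    // Publish unit test reports
                    junit 'target/surefire-reports/*.xml'
                }
            }
        }
        stage('Docker Image') {
            steps {
                // Build and tag an image for this build number
                sh 'docker build -t my-app:${BUILD_NUMBER} .'
            }
        }
    }
    post {
        failure {
            // Notify the team when the pipeline fails
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL}"
        }
    }
}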

Question: What are the different approaches for testing a RESTful API developed in Java? Could you elaborate on the various types of tests such as unit tests, integration tests, and end-to-end tests? How would you automate these tests to ensure the reliability and quality of the API?

Testing is crucial for ensuring the reliability and quality of RESTful APIs, especially when developing them in Java. Here’s an approach I would typically follow for testing a Java-based RESTful API, covering different types of tests and automation:

  1. Unit Tests:
    • Write unit tests for individual components/classes of the API, such as controllers, services, repositories, and utility classes.
    • Use frameworks like JUnit, Mockito, and PowerMock for writing and running unit tests.
    • Test individual methods and functions for expected behavior, edge cases, and error handling.
    • Aim for a high code coverage percentage (e.g., 80% or more) to ensure thorough testing of the codebase.
  2. Integration Tests:
    • Write integration tests to ensure the correct interaction between different components of the API (e.g., controllers, services, databases).
    • Use frameworks like Spring’s MockMvc or RestAssured to simulate HTTP requests and validate responses.
    • Test scenarios involving multiple components working together, such as data persistence, security, and external service integration.
    • If using a database, consider spinning up an in-memory or containerized instance for integration testing.
  3. Contract Tests:
    • Implement contract tests (also known as provider/consumer tests) to validate the API’s adherence to its specified contract (e.g., OpenAPI/Swagger specification).
    • Use tools like Spring Cloud Contract or Pact to define and test the contract between the API provider and consumers.
    • Ensure that changes to the API don’t break existing consumers by continuously validating the contract.
  4. End-to-End (E2E) Tests:
    • Write end-to-end tests to simulate real-world scenarios and test the API’s behavior from the client’s perspective.
    • Use tools like Selenium, Cypress, or Postman for creating and running E2E tests.
    • Test complex workflows, authentication, authorization, and integration with other systems (e.g., message queues, external APIs).
    • Consider spinning up a complete test environment, including databases, caches, and other dependencies, for realistic E2E testing.
  5. Load and Performance Tests:
    • Conduct load and performance tests to ensure the API can handle expected levels of traffic and performance requirements.
    • Use tools like Apache JMeter, Gatling, or k6 to simulate concurrent users and measure response times, throughput, and resource utilization.
    • Test the API under various load conditions, including normal, peak, and extreme scenarios.
  6. Security Tests:
    • Perform security testing to identify and mitigate potential vulnerabilities in the API.
    • Use tools like OWASP ZAP or Burp Suite to test for common web application vulnerabilities (e.g., SQL injection, XSS, CSRF).
    • Test authentication, authorization, and access control mechanisms to ensure proper implementation.
    • Consider engaging with security experts or penetration testers for more thorough security testing.
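
For instance, an integration test written with Spring’s MockMvc (item 2 above) might look like the sketch below, where the /api/users/{id} endpoint and its JSON shape are hypothetical:

Java
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

@SpringBootTest
@AutoConfigureMockMvc
class UserApiIntegrationTest {

    @Autowired
    private MockMvc mockMvc;

    // Exercises the full Spring MVC stack without starting a real server:
    // expects HTTP 200 and a JSON body containing the requested id.
    @Test
    void getUserById_returnsUser() throws Exception {
        mockMvc.perform(get("/api/users/1"))
               .andExpect(status().isOk())
               .andExpect(jsonPath("$.id").value(1));
    }
}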

To automate these tests effectively, you can:

  • Integrate test execution into your Continuous Integration/Continuous Deployment (CI/CD) pipeline using tools like Jenkins, GitLab CI/CD, or GitHub Actions.
  • Use build tools like Maven or Gradle to manage test dependencies and execute tests as part of the build process.
  • Implement test parallelization and distributed testing to reduce execution times for large test suites.
  • Leverage containerization (e.g., Docker) to create consistent and isolated test environments.
  • Set up monitoring and reporting for test results, code coverage, and other quality metrics.
  • Implement test data management strategies, such as using test data builders or in-memory databases, to ensure consistent and reliable test data.

By following this approach, you can ensure comprehensive testing of your Java RESTful API, covering various aspects like functionality, integration, contracts, performance, security, and more. Automated testing, combined with a robust CI/CD pipeline, can significantly improve the reliability and quality of your API while enabling faster and more frequent deployments.

Question: Please tell me how we can avoid deadlocks in Java?

Deadlocks are a common concurrency issue that can occur in Java programs when two or more threads are waiting for each other to release resources they hold, creating a circular dependency. To avoid deadlocks in Java, you can follow these guidelines:

  1. Acquire Resources in the Same Order: If multiple resources are required by a thread, always acquire them in the same order across all threads. This prevents circular waiting, which is a necessary condition for deadlocks.
  2. Lock Ordering or Thread Ordering: Define a specific order for acquiring locks, and ensure that all threads follow this order. This can be achieved by assigning a unique priority or rank to each lock and acquiring locks in the increasing order of their priority/rank.
  3. Avoid Nested Locking: Avoid acquiring multiple locks within the same thread, especially when the order of lock acquisition is not clear or hard to maintain. If nested locking is unavoidable, be extremely careful and thoroughly analyze the code for potential deadlocks.
  4. Use Lock Timeout: When acquiring a lock, specify a timeout period instead of waiting indefinitely. This way, if a thread cannot acquire a lock within the specified timeout, it can back off and retry later, preventing a deadlock situation.
  5. Deadlock Detection and Recovery: Implement a deadlock detection mechanism in your application, and take appropriate actions when a deadlock is detected, such as aborting or restarting one or more threads involved in the deadlock.
  6. Avoid Unnecessary Locking: Only acquire locks when necessary, and release them as soon as possible. Minimize the critical sections that require locking to reduce the chances of deadlocks.
  7. Use Higher-Level Concurrency Constructs: Instead of manually acquiring and releasing locks, consider using higher-level concurrency constructs provided by Java, such as java.util.concurrent classes like Semaphore, CountDownLatch, and CyclicBarrier. These constructs can help manage resource acquisition and release in a more structured and safer manner.
  8. Proper Resource Cleanup: Always ensure that resources (locks, database connections, file handles, etc.) are properly cleaned up or released, even in the presence of exceptions or errors. This can be achieved using try-finally blocks or Java’s try-with-resources statement.
  9. Code Reviews and Testing: Regularly review your concurrent code for potential deadlock scenarios, and implement comprehensive testing strategies to detect and prevent deadlocks during development and testing phases.
  10. Simplify Concurrency Design: Whenever possible, simplify your concurrency design by minimizing shared resources and synchronization points. This reduces the complexity of your code and the chances of introducing deadlocks.
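
As a small illustration of points 2 and 4 above, acquiring locks in a fixed order and backing off with a timeout can be combined like this (a sketch; the class and lock names are hypothetical):

Java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class AccountTransfer {

    // Both locks are always requested in the same order: lockA first, then lockB.
    private final ReentrantLock lockA = new ReentrantLock();
    private final ReentrantLock lockB = new ReentrantLock();

    // Tries to take both locks with a timeout; if either cannot be acquired in
    // time, the method releases whatever it holds and reports failure instead
    // of waiting forever, so a deadlock cannot form.
    public boolean transfer() throws InterruptedException {
        if (lockA.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (lockB.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        // ... perform the work that needs both locks ...
                        return true;
                    } finally {
                        lockB.unlock();
                    }
                }
            } finally {
                lockA.unlock();
            }
        }
        return false; // back off; the caller can retry later
    }
}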

By following these guidelines and best practices, you can significantly reduce the risk of deadlocks in your Java applications. However, it’s important to note that completely avoiding deadlocks in complex concurrent systems can be challenging, and thorough testing and monitoring are essential to identify and resolve any remaining issues.

Question: In Java, if I have to store passwords and the options are between character arrays and strings, which one should be preferred?

When it comes to storing passwords in Java, character arrays should be preferred over strings for security reasons.

Here’s why character arrays are considered a better choice:

  1. Mutability: A character array is mutable, so its contents can be explicitly overwritten (for example with Arrays.fill) the moment the password is no longer needed. A String can never be modified after creation, so the plain-text password remains in memory until the garbage collector eventually reclaims it. This is also why JDK APIs such as Console.readPassword() return a char[] rather than a String.
  2. Garbage Collection: When a string object is no longer referenced, it becomes eligible for garbage collection, but its contents may linger in memory (and can even show up in heap dumps) until the collector actually runs. This creates a window of opportunity for an attacker with access to the process memory. A character array can be wiped immediately, reducing the risk of lingering data.
  3. String Interning: String literals and explicitly interned strings live in the string pool for the lifetime of the JVM. If a password is ever interned, or concatenated into other strings (for logging, for example), additional long-lived copies of the sensitive data can accumulate in memory.
  4. Copies from String Operations: Because strings are immutable, every operation on a string (substring, concatenation, case conversion) creates a new string object, which can leave multiple copies of the sensitive data scattered across the heap and increases the potential attack surface.

Here’s an example of how to store a password securely using a character array:

import java.util.Arrays;

public class PasswordManager {
    private char[] password;

    public void setPassword(char[] password) {
        this.password = password.clone(); // Store a copy to avoid sharing the caller's array
    }

    public boolean authenticate(char[] enteredPassword) {
        boolean matches = enteredPassword.length == password.length;

        // Compare the entered password with the stored password character by character
        for (int i = 0; matches && i < password.length; i++) {
            if (enteredPassword[i] != password[i]) {
                matches = false;
            }
        }

        // Clear the entered password from memory regardless of the result
        Arrays.fill(enteredPassword, '\0');
        return matches;
    }

    public void clearPassword() {
        // Overwrite the stored password when it is no longer needed
        Arrays.fill(password, '\0');
    }
}

In this example, the setPassword method stores a defensive copy of the password character array to avoid reference sharing. The authenticate method compares the entered password with the stored password and clears the entered password from memory after the comparison, whether or not it matches. Finally, the clearPassword method overwrites the stored password with null characters before it is discarded.

It’s important to note that using character arrays alone is not a complete solution for secure password handling. You should also consider other security measures, such as salting and hashing passwords, using secure communication channels, and following industry-standard security practices.
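
For example, a salted hash can be derived directly from the char[] using the JDK’s built-in PBKDF2 support (a sketch; the iteration count and salt size are illustrative choices):

Java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.spec.InvalidKeySpecException;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {

    // Derives a 256-bit PBKDF2 hash from the password and salt; the key spec
    // is cleared afterwards so the password does not linger inside it.
    public static byte[] hash(char[] password, byte[] salt)
            throws NoSuchAlgorithmException, InvalidKeySpecException {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
        try {
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                   .generateSecret(spec)
                                   .getEncoded();
        } finally {
            spec.clearPassword();
        }
    }

    // Generates a random 16-byte salt to store alongside the hash.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}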

Question: Given an array and a target sum of 9, how can we find all pairs in the array that add up to this target sum? Could you explain the approach using a hash map?

Sure, I can explain the approach to find all pairs in an array that add up to a given target sum using a hash map in Java.

import java.util.HashMap;
import java.util.Map;
import java.util.ArrayList;
import java.util.List;

public class PairSum {
    public static List<List<Integer>> findPairs(int[] nums, int target) {
        List<List<Integer>> pairs = new ArrayList<>();
        Map<Integer, Integer> map = new HashMap<>();

        // Iterate through the array
        for (int num : nums) {
            int complement = target - num;

            // Check if the complement exists in the map
            if (map.containsKey(complement)) {
                // If it does, add the pair to the result
                List<Integer> pair = new ArrayList<>();
                pair.add(complement);
                pair.add(num);
                pairs.add(pair);
            }

            // Update the map with the current number and its frequency
            map.put(num, map.getOrDefault(num, 0) + 1);
        }

        return pairs;
    }

    public static void main(String[] args) {
        int[] nums = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        int target = 9;
        List<List<Integer>> pairs = findPairs(nums, target);
        System.out.println("Pairs that add up to " + target + ": " + pairs);
    }
}

Explanation:

  1. We define a method findPairs that takes an array of integers nums and a target sum target as input.
  2. We create an empty list pairs to store the pairs that add up to the target sum.
  3. We create a hash map map to store the numbers from the array and their frequencies.
  4. We iterate through the array using a for-each loop.
  5. For each number num in the array, we calculate its complement complement = target - num.
  6. We check if the complement exists in the map using map.containsKey(complement).
    • If the complement exists, it means we have found a pair that adds up to the target sum.
    • We create a new list pair and add both the complement and the current number num to it.
    • We add the pair to the pairs list.
  7. After checking for the complement, we update the map with the current number num and its frequency using map.put(num, map.getOrDefault(num, 0) + 1).
    • map.getOrDefault(num, 0) retrieves the current frequency of num in the map, or 0 if it doesn’t exist.
    • We increment the frequency by 1 and store the updated frequency in the map.
  8. After iterating through the entire array, we return the pairs list containing all the pairs that add up to the target sum.
  9. In the main method, we create an example array nums and a target sum target.
  10. We call the findPairs method with the array and target sum, and store the result in the pairs list.
  11. Finally, we print the pairs list.

Output:

Pairs that add up to 9: [[4, 5], [3, 6], [2, 7], [1, 8]]

This approach uses a hash map to store the numbers and their frequencies, allowing us to quickly check if the complement of a number exists in the map. The time complexity of this solution is O(n), where n is the length of the input array, since we iterate through the array once. The space complexity is O(n) as well, in the worst case where all elements in the array are distinct.

The solution provided is optimized in terms of time complexity, which is O(n). It achieves this by using a hash map to store the numbers and their frequencies, allowing us to check if the complement of a number exists in constant time O(1) on average.

However, when the input is limited to non-negative integers no larger than the target, the hash map can be replaced with a fixed-size frequency array, reducing the auxiliary lookup structure from O(n) to O(target).

Here’s a version of the solution that uses an array instead of a hash map to store the frequencies, assuming the input array contains only non-negative integers (values larger than the target can never be part of a pair, so they are simply skipped):

import java.util.ArrayList;
import java.util.List;

public class PairSum {
    public static List<List<Integer>> findPairs(int[] nums, int target) {
        List<List<Integer>> pairs = new ArrayList<>();
        int[] frequency = new int[target + 1]; // Array to store frequencies

        // Iterate through the array and record frequencies,
        // skipping values too large to ever be part of a pair
        for (int num : nums) {
            if (num <= target) {
                frequency[num]++;
            }
        }

        // Iterate through the array again and find pairs
        for (int num : nums) {
            int complement = target - num;

            // Check that the complement is valid and both numbers are still available
            if (complement >= 0 && frequency[num] > 0 && frequency[complement] > 0) {
                if (complement == num && frequency[complement] < 2) {
                    // The number would have to pair with itself but occurs only once
                    continue;
                }

                // Add the pair to the result
                List<Integer> pair = new ArrayList<>();
                pair.add(num);
                pair.add(complement);
                pairs.add(pair);

                // Decrement the frequencies to avoid duplicates
                frequency[num]--;
                frequency[complement]--;
            }
        }

        return pairs;
    }

    public static void main(String[] args) {
        int[] nums = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        int target = 9;
        List<List<Integer>> pairs = findPairs(nums, target);
        System.out.println("Pairs that add up to " + target + ": " + pairs);
    }
}

Explanation:

  1. We create an array frequency of size target + 1 to store the frequencies of each number in the input array.
  2. We iterate through the input array and record the frequency of each number, skipping values larger than the target since they can never be part of a valid pair.
  3. We iterate through the input array again and check, for each number num, whether its complement complement = target - num is still available in the frequency array.
    • If both num and its complement are still available (their frequencies are greater than 0), we create a new pair and add it to the pairs list.
    • If the complement is the same number (complement == num), we skip it unless that number occurs at least twice, so an element is never paired with itself.
    • After adding a pair, we decrement the frequencies of both num and complement so they are not reused in subsequent iterations.
  4. Finally, we return the pairs list containing all the pairs that add up to the target sum.

This version still runs in O(n) time, where n is the length of the input array, since we iterate through the array twice. The auxiliary space used for lookups is O(target), the size of the frequency array, while the pairs list itself can still hold up to O(n) entries.

By using a fixed-size frequency array instead of a hash map, the lookup structure stays small whenever the target is a small, bounded number, regardless of how many elements the input contains. If the values can be large, negative, or unbounded, the hash map solution remains the more flexible and space-efficient choice.

References:

  1. “Interview at Caspex Bangalore.” CodeTechSummit, 2024, [codetechsummit.com/interview-at-caspex-bangalore/].
  2. “About Caspex.” Caspex Official Website, 2024, [caspex.com/about-us/].
  3. “Caspex Services.” Caspex Official Website, 2024, [caspex.com/services/].

Neelabh

About Author

As Neelabh Singh, I am a Senior Software Engineer with 6.6 years of experience, specializing in Java technologies, Microservices, AWS, Algorithms, and Data Structures. I am also a technology blogger and an active participant in several online coding communities.
