I have acquired several very large files. Specifically, CSVs of 100+ GB.

I want to search for text in these files faster than manually running grep.

To do this, I need to index the files, right? Would something like Aleph be good for this? It seems like the right tool…

https://github.com/alephdata/aleph

Any other tools for doing this?

  • yaroto98@lemmy.org · 15 points · 2 months ago

    Done this with massive log files. Used Perl and regex. That’s basically what the language was built for.

    But with CSVs? I’d throw them in a db with an index.
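
    For example, a minimal sketch of that approach in Python with SQLite (the file, table, and column names are made up; adjust to the real CSV):

        import csv
        import sqlite3

        con = sqlite3.connect("data.db")
        con.execute("CREATE TABLE rows (name TEXT, email TEXT, extra TEXT)")

        # Stream the CSV in so the 100+ GB file never has to fit in memory.
        # Assumes every row has exactly these three columns.
        with open("data.csv", newline="") as f:
            reader = csv.reader(f)
            next(reader)  # skip the header row
            con.executemany("INSERT INTO rows VALUES (?, ?, ?)", reader)

        # Index the column you plan to search on; this is the slow, one-time step.
        con.execute("CREATE INDEX idx_name ON rows (name)")
        con.commit()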

  • jonne@infosec.pub · 11 points · 2 months ago (edited)

    Really depends on what the data is and whether you want to search it regularly or just as a one-time thing.

    You could load them into an RDBMS (MySQL/Postgres) and have it handle the indexing, or use Python tools to process the files. Something like Elasticsearch could work too.

    If it’s just a one-time thing, grep is probably fine tho.

    Aleph could work as well but I have no experience with it.

    I guess it depends on how much time you want to invest in setting something up versus how much time you’d lose waiting for grep to finish. If you only need to search a certain column, you can create an index with just that column using awk, search that index file, then extract the full line from the source file based on the result, but at that point you’re basically building your own database engine (rough sketch of that idea below).
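
    Here is that index-then-seek idea sketched in Python rather than awk (the column position and file names are placeholders):

        # One-time pass: record the byte offset of every line, keyed by one column.
        # Note: a plain split(",") ignores quoted fields; use the csv module if
        # your data quotes commas.
        with open("data.csv", "rb") as src, open("data.idx", "w") as idx:
            offset = 0
            for line in src:
                fields = line.decode("utf-8", "replace").rstrip("\n").split(",")
                if len(fields) > 2:
                    idx.write(f"{fields[2]}\t{offset}\n")
                offset += len(line)

        # Later: grep data.idx for the value, take the offset, and seek straight
        # to the full line in the source file.
        def fetch(offset):
            with open("data.csv", "rb") as src:
                src.seek(offset)
                return src.readline().decode("utf-8", "replace")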

  • tal@lemmy.today · 10 points · 2 months ago (edited)

    Are you looking for specific values in some field in this table, or substrings in that field?

    If specific values, I’d probably import the CSV file into a database with a column indexed on the value you care about.
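
    The lookup side is then a cheap indexed query, e.g. with SQLite (assuming a rows table with an indexed name column, as in the sketch further up):

        import sqlite3

        con = sqlite3.connect("data.db")
        # Exact match on the indexed column: a B-tree lookup, not a full scan.
        for row in con.execute("SELECT * FROM rows WHERE name = ?", ("deniro",)):
            print(row)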

  • irotsoma@lemmy.blahaj.zone · 4 points · 2 months ago

    I’ve used Java Scanner objects to do this extremely efficiently with minimal memory required, even with multiple parallel searches. Indexing is only necessary if you want to search for information many times and don’t know exactly what the search will be. For one-time searches it’s not going to be useful; grep is honestly going to be faster and more efficient for most one-time searches.

    The initial indexing or searching of the files will be bottlenecked by the speed of the disk the files are on, no matter what you do. It only helps to index because you can move future searches to faster memory.

    So it greatly depends on what you need to search and how often. The tradeoff is memory usage, and it only pays off across multiple searches of data you chose to index from the files in the first pass.
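
    Roughly the same streaming approach, sketched in Python since the idea is language-agnostic (the pattern and file name are placeholders):

        import re

        pattern = re.compile(r"deniro", re.IGNORECASE)

        # Read line by line so memory use stays flat regardless of file size;
        # the disk, not the language runtime, is the bottleneck.
        with open("data.csv", encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if pattern.search(line):
                    print(lineno, line, end="")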

  • TiTeY`@jlai.lu · 3 points · 2 months ago (edited)

    If the CSV entries are similar, you can try OpenSearch or Elasticsearch. They’re great for plain-text search (both are built on Lucene).
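
    A rough sketch of bulk-loading the rows with the official Python client (the host, index name, and CSV layout are all assumptions):

        import csv
        from elasticsearch import Elasticsearch, helpers

        es = Elasticsearch("http://localhost:9200")

        def actions():
            with open("data.csv", newline="") as f:
                # DictReader keys each row by the CSV header names.
                for row in csv.DictReader(f):
                    yield {"_index": "csv-rows", "_source": row}

        # Stream rows to the cluster in batches instead of one request per row.
        helpers.bulk(es, actions())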

  • nelson@lemmy.world · 1 point · 2 months ago (edited)

    An RDBMS shines on get-by-id queries. Queries where the value starts with the search term should also work well, but queries where the word is in the middle of the value generally don’t perform well. Since it’s just for personal use, that might not matter too much. If you’re querying on exact values it’ll go pretty smoothly; if you’re querying on ‘deniro’ while the value contains ‘bob deniro’ and others, it’ll be less performant. But it’s possible it works well enough for your case.
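
    For example, with SQLite (the story is similar in most engines; table and column names are made up):

        import sqlite3

        con = sqlite3.connect("data.db")
        # Prefix pattern: the engine can walk an index on the column
        # (SQLite additionally needs a matching collation on that index).
        con.execute("SELECT * FROM rows WHERE name LIKE 'deniro%'")
        # Leading wildcard: no index can help, so this scans every row.
        con.execute("SELECT * FROM rows WHERE name LIKE '%deniro%'")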

    Elasticsearch is well known for text searches and being incredibly flexible with queries and filtering. https://www.elastic.co/

    Manticore is one that’s been on my check-it-out list for I don’t know how long. It looks great imo: https://manticoresearch.com/

    OpenSearch: https://opensearch.org/

    Disclaimer: I haven’t really used any RDBMS extensively for years, so it’s possible some have since added more performant full-text search support.
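
    For what it’s worth, some engines do ship this now, e.g. SQLite’s FTS5 module. A minimal sketch, reusing the hypothetical rows table from earlier in the thread:

        import sqlite3

        con = sqlite3.connect("data.db")
        # FTS5 builds an inverted index over the text, so a word in the middle
        # of a value is found without a full scan.
        con.execute("CREATE VIRTUAL TABLE docs USING fts5(line)")
        con.execute("INSERT INTO docs (line) SELECT name || ' ' || email FROM rows")
        for (line,) in con.execute("SELECT line FROM docs WHERE docs MATCH 'deniro'"):
            print(line)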

    Aleph also seems to be able to cross-reference data between documents. I don’t think any of the ones listed above do that, but I also don’t know if it’s part of your requirements.