OFF System 0.18.14 released

  • TIP

  • balu66
  • 2800 views, 21 replies


  • OFF System 0.18.14 released

    Owner Free Filesystem, OFF for short, is an open-source system for storing and retrieving digital data in a network. Access to specific data is possible only for authorized persons. OFF is the proof of concept for so-called Bright Nets. The system is designed to guarantee the anonymity of its users, which matters more than ever today, since users of networks such as Bearshare, Kazaa, eDonkey or BitTorrent are increasingly targeted by criminal prosecution and civil claims.

    Concept

    All files in the OFF network are stored in randomized data blocks. Before upload, every offered file is split on the user's own disk into numerous pieces, which are then mixed with other files that have nothing to do with the original one. The individual mixed pieces are then encrypted and distributed to neighbouring nodes. Each participant in the OFF network frees up as much disk space as they see fit, on which the individual mixed pieces are then stored in encrypted form.
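
    A minimal sketch of the mixing idea, under the assumption that the
    "mixing" is an XOR of fixed-size blocks (all names here are
    illustrative, not the actual OFF code):

    [code]
    // Store side: split a file into fixed-size chunks (padded to size) and
    // XOR each chunk with two unrelated blocks already in the local cache.
    // Only the XOR result leaves the machine; names are hypothetical.
    #include <cstddef>
    #include <vector>

    using Block = std::vector<unsigned char>;
    constexpr std::size_t BLOCK_SIZE = 128 * 1024;  // 128 KB

    Block xorBlocks(const Block& a, const Block& b) {
        Block out(BLOCK_SIZE);
        for (std::size_t i = 0; i < BLOCK_SIZE; ++i)
            out[i] = a[i] ^ b[i];
        return out;
    }

    // One stored block per source chunk: chunk ^ r1 ^ r2. Without r1 and
    // r2, the stored block reveals nothing about the original chunk.
    Block encodeChunk(const Block& chunk, const Block& r1, const Block& r2) {
        return xorBlocks(xorBlocks(chunk, r1), r2);
    }
    [/code]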

    To download a file, you need a key, which is issued in the form of a URL. The client then fetches the scattered pieces from the swap space of the selected nodes and reassembles them into a complete file. Because the pieces are ambiguous, each arising as a combination of several files or pieces, no conclusions about the original files can be drawn from the stored data itself.
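
    The retrieval side is the same operation in reverse; a sketch reusing
    Block and xorBlocks() from the sketch above (the descriptor layout is
    assumed for illustration):

    [code]
    // Retrieve side: the URL/descriptor says which stored blocks belong
    // together; XORing them recovers the original chunk, since XOR is its
    // own inverse: (c ^ r1 ^ r2) ^ r1 ^ r2 == c.
    struct TupleRef {            // hypothetical descriptor entry
        Block stored, r1, r2;    // mixed block plus its two companions
    };

    Block decodeChunk(const TupleRef& t) {
        return xorBlocks(xorBlocks(t.stored, t.r1), t.r2);
    }
    [/code]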

    The only element that allows this mapping is the URL of the uploaded files. Publishing these keys through the search is, however, entirely optional ("protected"); the URLs can simply be exchanged anonymously over other channels instead.

    The Owner Free Filesystem pursues the same goal as the Freenet project, but is easier to use and optimized for speed.


    0.18.14

    Toggling listctrl lines from the view menu caused a crash under GTK; the
    wxw listctrl sample seemed to have similar problems, so the option is
    disabled under Linux for now.

    Implemented a PtrMultimapH class which is similar to PtrMapH but allows
    colliding hash keys.
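
    PtrMultimapH is OFF's own class; as a rough standard-library analogy
    of what allowing colliding hash keys buys (illustrative only):

    [code]
    // Like std::map, but several values may share one key - here, several
    // search results sharing the same filehash can coexist under it.
    #include <map>
    #include <string>

    struct SrchResult;  // details not shown

    std::multimap<std::string, SrchResult*> results;
    // results.insert({filehash, a}); results.insert({filehash, b});
    // results.equal_range(filehash) then yields both entries.
    [/code]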

    The global srch results are now contained in a PtrMultimapH object instead
    of a linked list. The filehash is used as the map key.

    Fixed bug where a paused download would not really be destroyed if cancelled
    and would reappear on restart.

    Search results for primary (user) srches are now stored in a PtrMultimapH
    object. These use the first descriptor hash as a key - may change to the
    filehash later.

    Each local URL now has an entry in the global srch results list, this means
    we don't need to access the local URLs whenever we receive a srch request
    and the local urls can be looked up in the same map as the srch results.
    However it does mean that the list needs to be kept updated as to the
    protected and searchable status of the locals.

    Srch records are now stored in PtrMap classes.

    Srch result owner ids are now held in an std::set<int> instead of a
    linked list.

    Fixed another bug that could prevent download blocks being preserved when
    the first hashmap tuple was missing.

    Fixed occasional crasher in mapping of concatenated inserts.

    CRITICAL: fixed crasher that affected only release builds with verbose
    mode on. Many thanks go to the user who put several hours effort into
    helping us find this shallow, yet hard to spot, bug.

    Downloads where corruption is detected in the descriptor hash will
    now be suspended with a warning in the status column. We'll handle this
    more intelligently later.

    New src files added:
    ptr_map.cxx,
    ptr_map.h (actually a version or two ago)


    Project websites
    OFFSYSTEM: Owner Free File System
    SourceForge.net: OFF System
    wiki:
    Owner Free Filesystem - Wikipedia
    The OFFSystem - OFFWiki

    German OFF user manual
    http://board.planetpeer.de/index.php/topic,3560.0.html

    Video tutorial for OFF beginners
    RapidShare Webhosting + Webspace

    The latest version is always available here:
    SourceForge.net: Files
  • Version 0.18.15 is out!

    Changes:

    0.18.15

    Fixed bug where the same URL could appear as several results in
    the same srch (was checking the wrong map for existing results).

    Tidied the recent STL code and removed the previous containing lists,
    which until now were #ifdef'd out.

    CRITICAL: fixed several crashers in the non-pool comms with firewalled
    nodes. This code is not executed often at present as most
    nodes are using the multiplexing pool cnxns, but will become more
    important as node numbers increase. Upgrades are strongly recommended
    as this somewhat prehistoric module will be tested further over the
    next few versions. (The current testing probably crashed a bunch
    of you - sorry about that.)
  • Edit:
    Latest version: 0.19.00
    Changes:
    Some tidying of the Node class.

    Added a checkbox to the Blockcache tab to set the cache limit as "hard".
    This will suspend downloads if the cache is oversize until the excess
    blocks can be trimmed out. We may later make the option refuse pushes
    as well, but let's see how it goes before we do that. WARNING: if the
    cache has too many preserved blocks then OFF will be unable to trim,
    and if the hard limit is set downloads will just stop until the user
    intervenes to unpreserve some inserts, increase the cache size or
    disable the hard limit. This option should be used with caution.
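
    In sketch form, the decision the option adds (all names hypothetical,
    not the actual OFF code):

    [code]
    #include <cstddef>

    // Downloads stay suspended while the cache is over its hard limit and
    // resume once trimming brings it back under. If every block is
    // preserved, trimming cannot happen and the user must intervene.
    bool shouldSuspendDownloads(std::size_t cacheSize,
                                std::size_t cacheLimit, bool hardLimit) {
        return hardLimit && cacheSize > cacheLimit;
    }
    [/code]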

    Mitigated a bug that causes inaccurate sorting of filesizes greater than
    2GB; this is a symptom of the fact that wxListCtrl on-board sorting
    only allows sorting by a long value. Hence such large sizes will still not
    be ordered correctly, but at least they now appear at the large-files end
    of the sort. At some point this will be changed to some kind of
    alphabetical sort that will sort all filesizes correctly.

    Declared stable, minor version number change.

    Switched to a new makefile for building on Linux. It will now link against
    system-installed wxWidgets libraries. The path will be autodetected, so
    most users just need to run "make".
    If you want to statically link against a local wxWidgets build like before,
    run "make LOCALWXPATH=/path/to/wx/build/"
  • I still have a question about anonymity in the OFF system:
    You can see the IP address of every user who is currently online - so how can it be anonymous???
    It may not be traceable who uploaded or downloaded which files where, but apparently it is no problem to find out who is behind which nickname.

    Or am I misunderstanding something?

    I'm very interested in the OFF system - and I wonder whether download links would be welcome here on the board, since the download itself is not exactly fast.

    Regards
    Earthling
  • @ Earthling: if anonymity is guaranteed (and not everyone can see the IP as you described), I have no problem with a slow DL (well, it also depends on what kind of DL it is - I probably wouldn't want to pull 7 GB or more that way ^^)
    [SIZE="1"]User helfen Usern: Die FSB-Tutoren

    Sag nein zu Filehostern - [color="blue"]AOKHA[/color]

    Ups: Horst Evers MovieMix How High Generals + 2x Xvid Viele Klassiker-Games[/SIZE]
  • I'm also only just working my way into the OFF system.
    The IP addresses are shown next to the nicks in the OFF program itself - but nobody knows what the person in question is actually doing.

    As for downloads, you can export the links from the program itself, i.e. I could send you a link and the download starts directly. So anonymity is guaranteed for the downloader.

    For me too, OFF would only be an option for smaller uploads - I wouldn't want to up- or download files of a few GB there either.

    Still hoping for posts from $pO0On; I think he can say more about this.

    Regards
    Earthling


    edit:

    I just found out that the links are only useful to someone who has OFF installed on their own PC.
    That's a shame - it means you can only give links to certain people.
    Not sure whether that has a future.....
  • Well, it would be worth a try, wouldn't it?

    As far as I've understood it, a bit of disk space is used for OFF on every client in order to create a kind of "virtual" hard drive. So what happens if a piece of the file I want to download sits on a machine that is switched off or has uninstalled the client?
    [SIZE="1"]User helfen Usern: Die FSB-Tutoren

    Sag nein zu Filehostern - [color="blue"]AOKHA[/color]

    Ups: Horst Evers MovieMix How High Generals + 2x Xvid Viele Klassiker-Games[/SIZE]
  • That's exactly what I'd like to know too!
    The files are presumably stored not just on one machine but on several.
    As far as I can see, you give up exactly as much disk space as the amount of data you have uploaded.

    If you're interested, you could install OFF as well; then we could test things like that.

    As I said - I find the system itself very interesting, and I can live with the speed as long as it's not several GB.
  • OFF does not obfuscate the communication between peers, which is why it is still reasonably fast for an anonymity solution.

    The special part is that a file is converted into multi-use encoded blocks: depending on how they are used (based on a descriptor block), the blocks can effectively be used for anything. The only question is where and how. In any case, a block therefore cannot be unambiguously attributed to one particular file. So you should be allowed to download a multi-purpose block without obfuscation, because it is not established what you will later use it for.

    When you insert a file, it is first split into such blocks and stored. OFF then tries to make the blocks known in the network, so that you are not the only source. The blocks take up more space than the original file, since some overhead is incurred.

    A downloader now needs the description in order to fetch the ambiguous blocks and finally reassemble them locally. So he first downloads the blocks directly from other nodes; once he has all of them, he can recover the desired file from them.

    Here is the very detailed description:
    OFFSYSTEM: Owner Free File System - Technology
  • Really a nice summary, Sonnentier! Let me write a bit more about the blocks.
    When a file is inserted, it is split into 128 KB blocks, each of which consists not only of the content of that one file but, to 2/3, of block snippets that were already on your PC. As already mentioned, you free up as much disk space as you like, and that is where the blocks you generate yourself and blocks from other users are stored. Those blocks from other users arrive on your PC all by themselves, and they are used to fill the 2/3 of each block being created. So from one block you can produce 3 different files, or at least parts of 3 different files are contained in it, so that a download of the file cannot prove that you used it to download the copyrighted file XYZ.
    So, now a file has been inserted into the network and blocks have been created from it. Next you should "disperse" the file, i.e. distribute it in the OFF network, so that it remains available when you go offline. All blocks of a file are sent to (currently) 3 other nodes (network participants). As a side effect this also gives a great download speed! :D
    To download a file, though, the very first thing you need is the OFF-URL already mentioned. One is created for every inserted file and stored in your own client. It can then be passed on freely, and (currently) you can also find files, or rather their URLs, through an integrated search.
    This OFF-URL serves both to find the right blocks for a file in the caches (the disk space freed up for OFF) of the other nodes, and as an assembly manual: OFF has to know in which order to put the snippets from the blocks together to obtain the desired file (see the sketch below).
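
    A hypothetical sketch of the information such an OFF-URL has to carry
    (the real URL format is defined by the OFF client, not by this struct):

    [code]
    #include <array>
    #include <cstddef>
    #include <string>
    #include <vector>

    // Both a block list and an assembly recipe: which blocks to fetch,
    // and which triples of them XOR back together into which chunk.
    struct OffUrlInfo {
        std::string filename;   // display name of the insert
        std::size_t filesize;   // for progress display and validation
        // one entry per chunk: hashes of the 3 blocks that recombine to it
        std::vector<std::array<std::string, 3>> tuples;
    };
    [/code]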

    Because of this distribution of blocks it also happens quite often that you already have 30% of the file you want to download on your PC. That can happen, for example, because you were among the 3 people to whom the uploader dispersed his file, or because someone wanted to empty his cache and had some of his blocks moved to other nodes. This creates a lively exchange of blocks, and the participants' internet connections are supposed to be used very efficiently, since many people hold parts of a file and a downloader can thus pull from many sources at once.
    I myself have already seen a DL speed of 350 KB/s on DSL 16000.

    Bear in mind, though, that OFF is still in its beta phase, which is why new versions appear weekly. Even so, it already runs quite well.
    Edit: the latest version is currently 0.19.04

    So, that's it from me for now. I hope it helps someone understand OFF a little better, even if tiredness and alcohol kept me from being entirely precise. :lol:
    Everything to read up on: OFFSYSTEM: Owner Free File System
  • Thanks for the explanation!

    How OFF works is (largely) clear to me by now.

    But I have one question left:
    What happens if I (or someone else who has blocks stored) delete the blocks (e.g. by formatting or the like)?
    Can that file then still be downloaded at all?
    And can I - and if so, how - delete files again that I have uploaded?

    I'd be glad if more people hooked into the OFF system - as I said, I really like the system so far.

    Regards
    Earthling
  • @ Earthling
    1) Blocks lost through manual deletion are simply gone, of course. The file can still be downloaded, though, because after "dispersing" it exists 4 times in the network (once at the uploader, plus the 3 copies it was distributed to). And as soon as someone has downloaded the file once, it is even available 5 times. So popular files are all the less likely to get lost.
    2) Files that have once been inserted into the network can (almost) never be deleted again. After uploading a file you no longer have any power over those blocks anyway, and the set of blocks you hold yourself could only be deleted manually from the block cache directory. Otherwise you can only "trim" a file, i.e. have the blocks moved to another node.
  • Latest OFF version 0.19.06

    Changes:

    Fixed bug in Preview where the RetrieveData object was created with
    the wrong type causing lots of "failed to load block" errors.

    Fixed bug where active nodelist requests would not be processed if
    there were enough nodes online that automatic nodelist requests were
    disabled.

    Split the data handling parts of OFFSocket out into a new class,
    OFFMsgBuffer which is inherited by OFFSocket.

    When a pool msg is received on a new channel, the task of assigning
    a channel to that logical connection is now done in the top level
    msg handler not the read handler. This should prevent skts clogging up
    the cnxn if a channel is not available.

    Removed the iterative part of assigning incoming pool msgs to waiting
    handlers as this should now happen instantly.

    Fixed bug where write items cancelled as a pool cnxn died would sometimes
    not tell their top level handler that they were destined for the grave far
    sooner than expected. Hence the handler would wait an inordinately
    long time for these deceased items.

    The read handler no longer creates a new OFFSocket object for each new
    pool msg. Instead, it re-uses an object until a msg generates a new
    logical connection in which case the socket object is passed off
    to the ACH and a replacement generated on the next pool msg. This
    should cut down on a lot of pointless new's and delete's.

    Reduced a lot of the disk access during the full cache check on startup
    by keeping a set of hashes found in the directory count. This seems
    to speed up slow checks by about a factor of 2.

    The Search tab now includes a "method" selector, with two options:
    i) Network is the current srch method, ii) Local should srch only
    the URLs in the local list and not send any requests to other nodes
    or return URLs which exist only in the results of existing srches.
    Hide/Show filtered results on local srches applies only to results
    filtered due to the incoming filter, the filter for local results
    is obviously ignored.
    Additional srch methods may appear later.

    Fixed bug where the implicit AND in srch terms was applied to filters
    as well which should be implicit OR.

    New src files added:
    offmsgbuffer.cxx,
    offmsgbuffer.h,
    offreadbuffer.cxx,
    offreadbuffer.h,
    msvc_constants.h


    Download:
    SourceForge.net: Files
  • Latest OFF version 0.19.08

    Changes:

    Fixed security vulnerability when offset and stream length are
    extracted from URLs referring to concatenated inserts.

    Refactored the way that URLs are handled in the core. Instead of
    holding the complete text, the URL is broken down into its constituent
    parts and stored in a URLBase object which is inherited by URL,
    SrchResult and SrchResult_sm. Insert (and hence Download) contain
    a ptr to a URLBase object as that is more convenient for the present.
    All URL hashes are stored in binary, so this should reduce the RAM cost
    of the various URL containing objects by 100 bytes or more.
    This also has the advantage of cleaning up any malformed but functional
    URLs which have accumulated over time (some of which have bad chars in
    unused locations due to a bug that I think is fixed now!).
    Old URLs which start with "http://this_host/this_script..." will be converted
    to the current format. URLs from the old php script which start with
    "http://shock.douwd.org/shock/offsystem.php..." should be preserved.
    Those which have somehow gotten mangled into something like this:
    "http://localhost:23402/shock/offsystem.php..." will be converted
    to the current format.

    Abstracted data calculated from the url (as opposed to actually being
    in the URL) into an InsertData class that is inherited by Insert and URL.

    CRITICAL: fixed crasher on scanning version data from very old nodes.
    Upgrade to this version strongly recommended since if a 10.08 turns
    up he will crash you! This affects versions since 0.19.02.

    Fixed bug where the block cache statistics were initialised if
    only a single URL was mapped. This may or may not have been responsible
    for blocks from preserved inserts being trimmed - I can't actually think
    how, but you never know.

    A temporary copy of the local url list is no longer required to map either
    a single URL or the entire list.

    Several of the columns marked (dbg) in Local URLs have been removed
    and the info they contained is now kept behind the scenes (in a PtrMap
    keyed by the List Rank number). This cuts down the resources used by
    local URL list entries a bit and also saves on scanning the info out
    of the list columns when it is required.

    Local URLs are now also contained in a PtrMultimapH keyed by the
    descriptor hash, this speeds up lookups based on the text of the URL.

    The file blacklist now uses binary instead of hex hashes.

    CRITICAL: fixed bug where any download limit would cause the node to
    go unresponsive after a short while. This affects versions 19.06 and 19.07.


    New src files added:
    insertdata.cxx,
    insertdata.h,
    offurlbase.cxx,
    offurlbase.h,
    srch_record.cxx,
    srch_result_list.cxx,
    srch_result_list.h

    Download:
    SourceForge.net: Files
  • Latest OFF version 0.19.09

    Changes:

    Removed some extraneous RAM use from download block requests.

    Implemented a validity check for ID hashes in nodelists from other nodes,
    some older versions send garbled nodelists. This is in addition to the
    existing checks - it seems some bad parameters could sneak through if
    what was mistaken for the hash was in fact 40 bytes long (a username
    in the observed occurrence of the bug).

    Fixed crasher in block pushes when a block is found to be no longer
    local and also happens to be the last block in the list.

    Fixed bug where imported blocks would not actually be moved! This
    resulted from a minor slip during the string conversion.

    Fixed crasher in GTK headless builds when finding mimetypes from file
    extensions. It seems the wxW mimetype manager does not like something
    about that build, so a local map is constructed from /usr/share/mime/globs.

    Downloads no longer have a single list of block requests, requests now
    fall into 3 categories: waiting, active and expired. All blocks start out
    as waiting, 10% of the blocks are then selected at random to be activated
    subject to minimum and maximum values of 50 and 1000 (parameterised).
    If a block has not been obtained after 50 requests (or one flood for
    a home block) it expires and is replaced by a new random waiting block.
    When the waiting list is empty it is refilled with any blocks in the
    expired list. This is intended to address 4 problems: i) excessive CPU
    use due to long lists of active blocks each of which must be checked
    when deciding which to request, ii) the record of which nodes a block
    was requested from can cause RAM use to increase without limit for
    large downloads for which few blocks can be obtained quickly, this
    record is deleted when a block expires, iii) the random order of block
    requests was lost when the requests were stored in a map instead of a
    linked list, iv) if a block cannot be obtained the frequency of requests
    for that block drops, hence the last few blocks of a download tend not
    to be requested very often resulting in a slow response if a home node
    finally obtains that block, the timer is reset when a block expires.
    (This last one is not likely to be the only reason for the "last few
    blocks" problem, so don't get too excited.) The number of blocks in each
    list is shown in a new column in Downloads which is only visible in
    Expert Mode. The specific numbers above are, as always, subject to change.
    In future versions, this will allow certain tuples to be downloaded first
    e.g. the first and last file tuples to enable previewing.
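
    A condensed sketch of that lifecycle; the parameter values (10%,
    50..1000 active, 50 requests before expiry) are the ones quoted above
    and, as stated, subject to change:

    [code]
    #include <algorithm>
    #include <cstddef>

    struct BlockRequest { int requestCount = 0; /* node record etc. */ };

    // Number of requests kept active: 10% of all blocks, clamped to [50, 1000].
    std::size_t activeTarget(std::size_t totalBlocks) {
        return std::clamp<std::size_t>(totalBlocks / 10, 50, 1000);
    }

    // An active request expires after 50 attempts; its per-node record is
    // dropped (bounding RAM) and a random waiting block takes its place.
    bool hasExpired(const BlockRequest& r) { return r.requestCount >= 50; }
    [/code]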

    Pings now return an additional status code for any existing pool cnxn,
    this is currently used for debugging but may be acted on in release
    builds later. For example, if a node says its pool is full further
    ping checks to that node will be delayed.

    Nodes now advertise an additional bucket radius in headers, this
    is the "topological radius" (TBR) which is the average distance of the
    closest 5 nodes to that node. The TBR is not used for anything yet, but
    later will be used to restrict requests and pushes to nodes with large
    bucket radii.
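
    How the TBR might be computed from the description above (sketch; the
    actual distance metric and all names are assumptions):

    [code]
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Average distance of the 5 closest known nodes; "distance" stands in
    // for whatever ID metric OFF uses internally.
    uint64_t topologicalRadius(std::vector<uint64_t> distances) {
        std::sort(distances.begin(), distances.end());
        const std::size_t n = std::min<std::size_t>(5, distances.size());
        if (n == 0) return 0;
        uint64_t sum = 0;
        for (std::size_t i = 0; i < n; ++i) sum += distances[i];
        return sum / n;
    }
    [/code]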

    The Send File option in the nodelist menu has been moved to Expert Mode.

    Semantic GUI changes: "Clear" and "Clear All" buttons in srch have been
    renamed to "Close Tab" and "Close All".

    Download:
    SourceForge.net: Files
  • Latest OFF version 0.19.10

    Changes:

    If a node sends a pool full code, we don't check again for one hour.
    Previously it was one minute.

    Oops, the updates for the "Waiting/Active/Expired" column in downloads
    were only compiled for debug builds. Fixed.

    Active block requests for downloads now initially expire after 3 requests.
    Once all blocks have been obtained or expired this is increased by 1
    up to a maximum of 50. Downloads will thus charge through the blocks
    asking the closest one or two nodes to pick up the easy-to-find ones
    and later search more exhaustively for blocks which are harder to find.
    The number of cycles and the current expire point are now shown in brackets
    in the same column as the waiting, active and expired requests.
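
    In sketch form (values from the text above; illustrative only):

    [code]
    // Expiry threshold per cycle: start at 3 requests per block, add 1
    // after each full pass through the blocks, cap at 50.
    int nextExpireThreshold(int current) {
        return current < 50 ? current + 1 : 50;
    }
    // Starting at 3, cheap blocks are swept up first; hard-to-find blocks
    // get progressively more exhaustive searching in later cycles.
    [/code]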

    Headless builds no longer require wxWidgets.

    PROTOCOL CHANGE: Block requests have been completely refactored. The
    requester now sends a list of blocks in a request. The requestee then
    returns first a list of the blocks that they do not have (if any), and
    then each of the blocks they do have followed by a finish code. The
    finish code can be sent before all the blocks are done in the case of
    an error or shutdown. Previously this was done in a request-response
    sequence for each of the blocks in a request group. This should
    significantly reduce the number of messages in the protocol and increase
    efficiency since groups of blocks are sent in sequence instead of
    having to wait for a request between each block. NOTE: this method is
    supported in this version but not actually used, it will be rolled out
    properly once it has been fully tested in the wild. The old method will
    also be supported for the foreseeable future for backwards compatibility.
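
    The new exchange, sketched with illustrative message names (the actual
    wire format is OFF's own):

    [code]
    // requester -> requestee : BlockRequest [hash1 ... hashN]
    // requestee -> requester : NotHave      [hashes it cannot serve]
    // requestee -> requester : Block hash,data  (repeated for each hit)
    // requestee -> requester : Finished     (may arrive early on error)
    //
    // versus the old scheme: one request and one response per block.
    enum class BlockMsg { BlockRequest, NotHave, Block, Finished };
    [/code]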

    The "Set Colours" menu (View) is now only shown in developer mode.

    Descriptors and hashmap blocks are now dispersed before file blocks
    when an insert is dispersed. Previously blocks were simply dispersed in
    sequence meaning that hashmap blocks were dispersed last.

    Removed 2 columns from the Downloads tab: i) "Requested (secs, dbg)", this
    info is now stored behind the scenes; ii) "DL Sum (dbg)", this was never
    actually used for anything.

    Fixed bug where download ETAs appear as "Error: -1 -1" on some platforms.

    Fixed crasher when a read object detects an error on a socket fd
    immediately after passing a message to a handler.

    Download:
    sourceforge.net/project/showfiles.php?group_id=96735
  • Latest OFF version 0.19.13

    Changes:

    Fixed bug where the "Waiting/Active/Expired" column in downloads was
    not kept current if the waiting and expired sets were empty.

    The SecReq class (for requesting blocks) now inherits AsyncCnxnBase. Over
    time this class will be integrated into AsyncCnxn which handles all other
    connection types, but that's not easy right now. Also did a lot of tidying
    in SecReq.

    Fixed display bug where a disperse that was paused and resumed would continue
    to show paused status (without any updates) despite actually resuming.

    Refactored top level connection handler iteration functions so that
    most iterations are waiting for a read or a write to complete. The next
    action is taken immediately on r/w completion instead of waiting for another
    round of iterations. This makes message handling more efficient.

    Incoming chat and global msgs no longer require the creation of a (rather
    large) general cnxn handler object.

    When most network reads and writes complete, the top-level handler is now
    iterated immediately. This makes the network code more efficient and also
    fixes a bug where subsequent messages could overwrite each other if they
    arrived too quickly and the other end was not expecting a response in between.

    Most connection types are now added directly to the ACH thread list
    (bypassing the update list) with an initial iteration thrown in to boot.
    Later all connection types will be handled this way, but for now there
    are 7 outgoing types that for various reasons cannot.

    Fixed bug where srch results would sometimes not be displayed.

    More bug fixes in new block transfer protocol - still not generally enabled,
    further testing is required.

    Download:
    SourceForge.net: Files
  • Latest OFF version 0.19.14

    Changes:

    Moved handling of all scheduled ACH tasks to a dispatcher in the main
    worker thread funcs.

    FinCnxns no longer use an update list as they are always added from the
    ACH thread.

    If the node is closed while a retrieve is in progress, the retrieve will
    now finish elegantly and remove the temp file (instead of charging off like
    an out-of-control freight train until the program either crashes or exits).

    Fixed fatal exception on Blacklist by Filename. Bug was due to mis-handling
    of string::npos.

    Command line syntax for chat is now only "message <node rank> <text>"; the
    options using host and port instead of rank have been removed.

    Fixed a couple of overflow vulnerabilities in parsing of http headers.

    Fixed mutex deadlock when duplicate nodes are found and the one which is
    removed has a pool cnxn.

    Those last 7 async types are now added directly and the AsyncCnxn update
    list has been removed.

    Config files are now saved from the ACH thread instead of the NLMan thread.

    Updates for the node readout in the status bar are now posted from the
    ACH thread instead of being done by the display timer.

    Pinging all online nodes (i.e. on port change) and sending global messages
    no longer require the creation of a temporary nodelist.

    Changed most of the sscanf calls to use istringstream instead, which
    is supposedly safer and more secure. As always, some are left over and will
    be cleared up as and when.
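
    A typical before/after for such a conversion (illustrative, not the
    actual OFF code):

    [code]
    #include <sstream>
    #include <string>

    // istringstream does type-checked, overflow-safe extraction where
    // sscanf trusts the format string and raw buffer sizes.
    bool parseHostPort(const std::string& line, std::string& host, int& port) {
        // old style: sscanf(line.c_str(), "%s %d", hostbuf, &port);
        std::istringstream in(line);
        return static_cast<bool>(in >> host >> port);
    }
    [/code]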

    The mutex lock on the main nodelist has been disabled as all access should
    now be from the ACH thread. If all goes well, it will be removed completely.

    Node IDs from headers are now stored as binary instead of hex.

    Outgoing headers are now stored in an std::string instead of a
    somewhat oversize char array.

    The url in Insert is now an std::string.

    Fixed bug where the RSA key verify time was updated on reading in nodes,
    this meant that RSA keys were not rechecked after the specified interval.

    Changed all config file writing to use a stringstream as a buffer. This
    allows data to be buffered until a reasonable amount is waiting to be
    written and also should fix a format problem in some locales where
    float values would be truncated at the decimal point on loading config.
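
    A sketch of the buffering idea; pinning the classic "C" locale is my
    assumption about how the decimal-point problem goes away, the changelog
    only says that buffering should fix it:

    [code]
    #include <fstream>
    #include <locale>
    #include <sstream>
    #include <string>

    void writeConfig(const std::string& path, double someFloatSetting) {
        std::ostringstream buf;
        buf.imbue(std::locale::classic());  // always "1.5", never "1,5"
        buf << "some_float_setting=" << someFloatSetting << "\n";
        // ...accumulate further settings in buf...
        std::ofstream out(path);
        out << buf.str();                   // one buffered write
    }
    [/code]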

    strcmp() and strcpy() have been expunged from the src tree.

    sprintf() is no more, except for one instance in some almost-never-used
    debugging code.

    New block protocol is enabled - let's see what happens.

    Download:
    SourceForge.net: Files
  • Latest OFF version 0.19.18

    Changes:

    Fixed minor bug where the old default unset IP string would be interpreted
    as a valid IP (which is why it was changed).

    Pool cnxn objects no longer keep a copy of the corresponding node object,
    this saves us the trouble of keeping the copy updated and will reduce
    RAM use if there are a lot of cnxns. The pool cnxn will look up node details
    from the main nodelist when they are needed, which is generally only on
    receiving a new message.

    Most high level network tasks no longer create a temporary node
    object.

    Fixed bug in pings where the remote flag for establishing pool cnxns
    was not properly heeded.

    Fixed bug where block reads in 2way comms were not flagged with the
    correct download rank.

    Fixed three block request bugs: the first caused too many requests to be
    added to a group if there were a lot of downloads (i.e. >100), the
    second caused too few requests to be added to a group as the total
    request count to a node was mistaken for the request count for each
    download, the third caused too few requests if complete downloads were
    present as these were included in the active dl count. These affect both
    block request methods.

    Inverted the colours on the bucket gauge to better demonstrate that
    a full bucket is good.

    Removed the update list for noslot msgs from DownloadManager as all access
    is now through the ACH thread.

    Refactored the shutdown procedure. The main thread now stops the ACH
    thread as soon as possible and then takes over its tasks for a maximum of
    30 seconds before killing everything. Previously the main thread would
    attempt to wait for the ACH thread to stop and clean up before continuing.
    This is hopefully a catch-all solution for the various shutdown crashers
    where the wait would timeout and the ACH thread would attempt to use
    a resource that the main thread had destroyed.

    Small change to the block request system when a request is received
    for a block we don't have: counts of better and worse nodes are examined
    when the block fits into the local bucket but is not closer than the TBR,
    instead of only when no buckets are found for that block. This seemed to
    help with the last blocks problem in tests.

    Fixed small uninitialised memory read in SrchResult class - probably
    not too serious.

    When the main server port is unavailable the same port will be tried
    60 times at 1 second intervals before giving up and not starting the
    server. Previously other port numbers would be tried. Should the original
    behaviour be desired, building with the defined constant PORT_FLIP_ON will
    provide it.

    Disperses now happen asynchronously in the ACH thread, the disperse
    thread has been removed.

    Fixed bug where targeted store did not include any daisy-chained
    hashmap blocks of the target insert.

    Fixed bug where the gui event handler would get hammered when the bucket
    size was changed, as an update was posted for each block; updates are
    now only posted after each blocklist and on completion of the changes.

    Trim pushes are now added from the ACH thread instead of the NLMan thread.
    Scanning for unpreserved blocks is an intensive process and so cannot
    happen in the ACH thread, as this would hold up essential tasks.

    The scanning phase of trimming is now launched from the ACH thread in
    the user task threadpool. The NLMan thread has been removed. (Annoying
    to have to send an automatic task to the user threadpool, but we are
    running out of threads.)

    Download:
    SourceForge.net: Files
  • Latest OFF version 0.19.20


    Changes:
    0.19.19

    CRITICAL: Fixed double mutex lock on disperse. Caused when the disperse thread
    was merged with the ACH thread last version.


    Changes:
    0.19.20

    Fixed bug in disperses of version 1 URLs where the presence of hashmap
    blocks was assumed. This produced incorrect progress display (>100%) and
    false errors.

    Fixed yet another instance of a socket object that could leak during the
    shutdown procedure - hopefully the last one.

    Scheduled pings and nodelist requests no longer need a copy of the
    relevant node object, instead they keep the rank in the main nodelist
    and look up the node details when necessary.

    CRITICAL: Added a short delay (5 seconds) between closing an incoming
    ping socket and opening the socket for a ping back (firewall check).
    This seems to clean up a large number of bizarre errors that regularly
    occur on pingback sockets and - no doubt - if I knew anything about TCP,
    I would understand why. This is critical since it can cause many nodes
    to appear walled, when actually they are not.

    Fixed memory leak when an attempt is made to disperse an invalid URL
    (not that this would happen very often).

    NON_RELEASE_BUILDS ONLY: Fixed shutdown crasher due to static objects
    being destroyed in the wrong order.

    Download:
    SourceForge.net: Files