context_0
The public ledger adds a new entry every 3-5 seconds, so I am worried it might grow too big for normal users to run a node. How big is the current ledger, and how much is it expected to grow each year?
context_1
I'm not seeing what I'd expect to see from the effects API endpoint. I created a test account `GBFYP5PVDTKVH2WYR73PUMP463SHU726S5KN6JJ632M7SG7OGLU3CFKN` and funded it. Then I issued [this transaction][1] to set the options `AuthorizationRequired` and `AuthorizationImmutable`. The value `setFlags=5` indicates that these values were set. But when I explore the [effects for this account][2], I see:

- `account_created`
- `signer_created`
- `account_flags_updated` with `auth_required_flag = true`
- `signer_updated` with no perceivable change to the signer

Firstly, why is there no mention of the setting of `auth_immutable`? Secondly, why does the signer get updated when it doesn't change?

[1]: https://www.stellar.org/laboratory/#xdr-viewer?input=AAAAAEuH9fUc1VPq2I%2F2%2BjH89uR6f16XVN8lPt6Z%2BRvuMumxAAAAZABtt1oAAAABAAAAAAAAAAAAAAABAAAAAAAAAAUAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHuMumxAAAAQIHGSm0bd0tD9Av1DdnWcc3oZuI0XQkOLRjojjtoeOiDQeKjl%2FYsO%2B5DOXrxYEtnjsgrkjJ5Rs2blH56dUAnzgg%3D&type=TransactionEnvelope&network=test
[2]: https://horizon-testnet.stellar.org/accounts/GBFYP5PVDTKVH2WYR73PUMP463SHU726S5KN6JJ632M7SG7OGLU3CFKN/effects
context_2
I have three stellar-core instances and one Horizon server. The Horizon server is connected to the first of the three nodes. All goes well for a while, until I start firing many, many transactions at Horizon (Stellar advertises being able to process 1k transactions/s, so I'm firing 1000 individual transactions at Horizon). At this point I start receiving 400 Bad Request errors from Horizon, and the Horizon logs show many 504 errors. When I do a "health check" on Stellar, all three nodes are synced and no node reports that other nodes are missing. In other words, everything looks dandy from the Stellar point of view. Do I need to run multiple Horizon instances to be able to fire 1k transactions at the thing? I'm running the three nodes and Horizon on a machine with 20 cores and an SSD. Stellar and PostgreSQL are all writing to the SSD.
context_3
I ran a basic private network with the line `COMMANDS=["upgrades?mode=set&upgradetime=1970-01-01T00:00:00Z&basefee=0&basereserve=0"]` inside my `stellar-core.cfg` file on **all** of my nodes (5 of them), so I'm expecting the network to treat all transactions with a `baseFee` of 0 and a `baseReserve` of 0. However, when I run `stellar-core -c info`, I can see `baseFee=100` and `baseReserve=100000000` under `ledger`, which I'm assuming are the default amounts. So either the command I ran didn't get applied correctly, or there is a difference between the ledger's `baseFee` and `baseReserve`. What's going on?
context_4
I'm having trouble explaining what's going on with my bids and offers:

- Issuer: GD57JZ42YKIEDLCFIZD2BW6LIYN7KIOEMFSIL37WZ5RS7DPLEW6DOOWG
- Asset: AXX
- Test1: GC5RQZPRI2SXMOL5JP2QTJXO72IAJLGMNU7TG5ADU6OMYDEJI3APJTZI (balance: 444 AXX)
- Test2: GBPJWQBGYSIWHFDLDKNO4XMHUC7RUOX7GBORLG4ZKUFGKWMPW77OIHXA (balance: 77 AXX)

They each create an offer:

- Test1: sell 20 AXX
- Test2: buy 3 AXX

What I expected to happen:

- Test1 balance: 441 AXX (444 - 3)
- Test2 balance: 80 AXX (77 + 3)
- Sell offer: 17 AXX (20 - 3)

What actually happened:

- Test1 balance: 424 AXX (444 - 20)
- Test2 balance: 97 AXX (77 + 20)
- Buy offer: 2.9 AXX (3 - 0.1)

So my questions are:

- Why did Test2 get 20 AXX when it only wanted 3?
- Why is there a buy offer of 2.9 AXX?

I'm hoping this is just my misunderstanding of the documentation.
context_5
I am trying to send data hashed using `SCryptUtil` (from `com.lambdaworks.crypto.SCryptUtil`): I convert it into an array of bytes and send it to the Stellar network using `ManageDataOperation.Builder`. My code is as follows:

```
String hash = SCryptUtil.scrypt("HashData", 16384, 8, 1);
System.out.println(hash);
String hashedData = hash;
byte[] hashedDataByte = hashedData.getBytes();
ManageDataOperation.Builder updateHashedData =
    new ManageDataOperation.Builder("Hashed Data", hashedDataByte);
Transaction tt = new Transaction.Builder(sourceAccount)
    .addOperation(updateHashedData.build())
    .addMemo(Memo.text("Test Transaction"))
    .setTimeout(1000)
    .build();
```

The data is not sent to the server. But when I use a simple string for the `hash` variable, like "Password", it gets sent. What am I missing here? Is there a way to send long hash codes to a Stellar account in this way?
context_6
When I include both `stellar-hd-wallet` and `stellar-sdk` in my project, I get errors when requiring them both: `stellar-base` is included twice, which also imports the generated XDR types twice, causing the error below. Any suggestions?

```
Error: XDR Error: Value is already defined
    at TypeBuilder.define (/code/blockchain-client/node_modules/js-xdr/lib/config.js:341:17)
    at TypeBuilder.typedef (/code/blockchain-client/node_modules/js-xdr/lib/config.js:247:14)
    at /code/blockchain-client/node_modules/stellar-hd-wallet/node_modules/stellar-base/lib/generated/stellar-xdr_generated.js:1:259
    at Object.config (/code/blockchain-client/node_modules/js-xdr/lib/config.js:38:5)
    at Object.<anonymous> (/code/blockchain-client/node_modules/stellar-hd-wallet/node_modules/stellar-base/lib/generated/stellar-xdr_generated.js:1:234)
```

These are the related deps in my package.json:

```
"stellar-hd-wallet": "0.0.6",
"stellar-sdk": "^0.9.2"
```
context_7
Does one ledger hold multiple transactions from different accounts? Looking at a single ledger, I see only a few transactions: https://stellarchain.io/ledger/17057632 In the above ledger sequence there are only 2 transactions; I assume the ledger can hold more. Second question: as a DApp developer (token creator, contract creator, etc.), do I need to worry about ledger features such as entries and sequence numbers? I view the ledgers as a single database where individual ledgers are chained.
context_8
Background:

- Using `satoshipay/stellar-horizon:0.11.1` for Horizon

Stellar Core seems to be running. Based on the logs, we get the latest sequence number and ledger:

```
2018-04-18T10:31:32.695 GDYMW [Ledger INFO] Got consensus: [seq=8508865, prev=fc51bd, tx_count=7, sv: [txH: c3108f, ct: 1524047493, upgrades: []]]
2018-04-18T10:31:37.620 GDYMW [Herder INFO] Quorum information for 8508864 : {"agree":3,"disagree":0,"fail_at":2,"hash":"273af2","missing":0,"phase":"EXTERNALIZE"}
2018-04-18T10:31:37.630 GDYMW [Ledger INFO] Got consensus: [seq=8508866, prev=1e8788, tx_count=7, sv: [txH: 6deefc, ct: 1524047497, upgrades: []]]
```

I set up Horizon to read from Stellar Core's database. Horizon ENV variables:

```
INGEST=true
CATCHUP_RECENT=1440
HISTORY_RETENTION_COUNT=1000
STELLAR_CORE_DATABASE_URL=
STELLAR_CORE_URL=
DATABASE_URL=
```

Stellar Core and its port are reachable from the Horizon machine. Horizon runs, but I don't see `core_latest_ledger` being updated:

```
"horizon_version": "",
"core_version": "v9.1.0",
"history_latest_ledger": 0,
"history_elder_ledger": 0,
"core_latest_ledger": 1,
"core_elder_ledger": 0,
"network_passphrase": "Test SDF Network ; September 2015",
"protocol_version": 9
```

Checked the Horizon logs:

```
time="2018-04-18T10:38:28Z" level=error msg="import session failed: failed to load header: sql: no rows in result set" pid=1
```

It looks like someone else encountered the same problem: https://stellar.stackexchange.com/questions/688/horizon-not-synchronizing-import-session-failed-failed-to-load-header-sql-n

I cleared the Horizon database and restarted Horizon. The Horizon database was reinitialized and Horizon is running again, but I still encountered the same error:

```
level=error msg="import session failed: failed to load header: sql: no rows in result set" pid=1
```

I ran through the steps here: https://www.stellar.org/developers/horizon/reference/admin.html#correcting-gaps-in-historical-data

I didn't encounter any errors, but after restarting Horizon I hit the same error again. My last resort would be to clear the `stellar-core` database and make sure it has no gaps. Any advice on the next steps?
context_11
I'm trying to find a way to keep Stellar disk usage in check. I only care about the last 24 hours or so of transactions and don't want to over-provision the instance. I started a docker container using the quickstart image:

```
docker run --rm -it -p "8000:8000" -v "/home/ec2-user/stellar:/opt/stellar" --name stellar stellar/quickstart --pubnet
```

and updated `~/stellar/core/etc/stellar-core.cfg`, adding these lines:

```
AUTOMATIC_MAINTENANCE_PERIOD=60
AUTOMATIC_MAINTENANCE_COUNT=500
CATCHUP_COMPLETE=false
```

which, after reading [this question](https://stellar.stackexchange.com/a/1394/2323), leads me to believe it will constrain the disk. Yet the usage of my 30 GB disk continues to climb until it consumes 100%.

[![disk usage][1]][1]

The reset to 25% usage is after I kill the container and delete `~/stellar`. I've also tried

```
docker exec 6 stellar-core http-command maintenance
```

but nothing good happens with the disk; it just reports `No work performed`. What am I missing?

[1]: https://i.stack.imgur.com/neI78.png
context_12
I've been maintaining a network of Stellar nodes with quite a lot of traffic for quite some time now (almost a year). Everything has been quite stable so far, but last week I noticed that all the core apps started exhibiting a new behaviour with respect to disk space: huge (~5 GB) temporary bucket files are being created and then deleted (in `/stellar-core/volumes/stellar-core/buckets/tmp/`). Is this normal? All my cores behave the same way: every ~10 minutes they use up an additional 5 GB of disk space and then release it. I suspect this started happening now because the network now uses lower levels of buckets (in the [BucketList][1]). There's no adverse effect; everything works just fine otherwise. I'd love to know why this is happening, and especially to know that it's 'normal'.

[1]: https://github.com/stellar/stellar-core/tree/master/src/bucket
context_13
For example, I see a few here: https://coinmarketcap.com/exchanges/stellar-decentralized-exchange/ Are there more, and if so, what's the data source? Thank you.
context_14
I would like to create a simple escrow account with two preauthorized time-bound transactions, but I'm not sure how to approach the problem. I am fairly new to Stellar, so correct me if I am wrong. I have to build two transactions with different destinations but the same sequence number (one higher than the current one on the account). A third transaction will add 2 signers, `tx1.hash()` and `tx2.hash()`, with some weight, e.g. 2 on both. This one will be signed and submitted. How can I send the same preauthorized transaction in the future so it matches the signer key? Do I have to store the original transactions as XDR? I am using stellar-sdk; building both works, and the signers are also getting created. I just need to get on the right track; maybe the whole approach is completely wrong.
context_15
I am getting the following exception while performing a send transaction using the Java Stellar SDK. Here is my **send transaction** code:

```
import org.stellar.sdk.AssetTypeNative;
import org.stellar.sdk.KeyPair;
import org.stellar.sdk.Memo;
import org.stellar.sdk.Network;
import org.stellar.sdk.PaymentOperation;
import org.stellar.sdk.Server;
import org.stellar.sdk.Transaction;
import org.stellar.sdk.responses.AccountResponse;
import org.stellar.sdk.responses.SubmitTransactionResponse;

public class sendTransaction {
    public static void main(String args[]) {
        Network.useTestNetwork();
        Server server = new Server("https://horizon-testnet.stellar.org");
        KeyPair source = KeyPair.fromSecretSeed("SXXXXXXXXXXXXXXXXXXXXXXXXXXX");
        KeyPair destination = KeyPair.fromAccountId("GXXXXXXXXXXXXXXXXXXXXXXXXX");
        try {
            server.accounts().account(destination);
        } catch (Exception e) {
            System.out.println(e.getMessage());
            return;
        }
        AccountResponse sourceAccount;
        try {
            sourceAccount = server.accounts().account(source);
        } catch (Exception e) {
            System.out.println(e.getMessage());
            return;
        }
        Transaction transaction = new Transaction.Builder(sourceAccount)
            .addOperation(new PaymentOperation.Builder(destination, new AssetTypeNative(), "10").build())
            .addMemo(Memo.text("hello"))
            .build();
        // Sign the transaction to prove you are actually the person sending it.
        transaction.sign(source);
        // And finally, send it off to Stellar!
        try {
            SubmitTransactionResponse response = server.submitTransaction(transaction);
            System.out.println("Success!");
        } catch (Exception e) {
            System.out.println("Something went wrong!");
            System.out.println(e.getMessage());
        }
    }
}
```

The exception:

```
Exception in thread "main" java.lang.RuntimeException: TimeBounds has to be set or you must call setTimeout(TIMEOUT_INFINITE).
    at org.stellar.sdk.Transaction$Builder.build(Transaction.java:385)
    at com.apiservice.controller.sendTransaction.main(sendTransaction.java:45)
```

The related entries in my **pom.xml** are: the jitpack.io repository (https://jitpack.io), `spring-boot-starter-web`, `spring-boot-starter-test` (test scope), `com.github.stellar:java-stellar-sdk:0.2.0`, and the `spring-boot-maven-plugin`.
context_16
I'm trying to submit a transaction that includes a payment operation. Sometimes the transaction succeeds; however, sometimes it fails. I believe this has something to do with the decimal being submitted. Before submitting the transaction, the decimal prints as 11.776242893769199616. On failed transactions, if I browse the XDR response (via the Stellar Laboratory), I see a negative amount: `amount: -6.6705011 (raw: -66705011)`. Any idea why this is a negative number?

**Response:**

```
Post Payment Error: Horizon request error of type request failed with message:
{
  "type": "https://stellar.org/horizon-errors/transaction_failed",
  "title": "Transaction Failed",
  "status": 400,
  "detail": "The transaction failed when submitted to the stellar network. The `extras.result_codes` field on this response contains further details. Descriptions of each code can be found at: https://www.stellar.org/developers/learn/concepts/list-of-operations.html",
  "extras": {
    "envelope_xdr": "[XDR INFO]",
    "result_codes": { "transaction": "tx_failed", "operations": ["op_malformed"] },
    "result_xdr": "AAAAAAAAAGT/////AAAAAQAAAAAAAAAB/////wAAAAA="
  }
}
```
context_17
I'm moving from testnet to pubnet, but for some reason my code didn't work with pubnet. Here is a snippet. Of course `sourceAccount`, `des.publicKey()` and `sourceKeys` contain what I'm expecting: `sourceAccount` contains the public key of the sender, `des.publicKey()` contains the public key of the receiver, and `sourceKeys` is a keypair from the secret. This works well on testnet but not on pubnet.

[![enter image description here][2]][2]
[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/kqIby.png
[2]: https://i.stack.imgur.com/ggyGY.png
context_18
How can I create a fiat credit from a bank account, for example from a Deutsche Bank account, in Stellar? Thanks.
context_19
I have a similar question to [this](https://stellar.stackexchange.com/questions/2353/network-maintenance-maintaining-archive), but it remains unanswered, and there is no documentation to be found when it comes to running a stable and resilient production network.

# Setup

I am working on a private network with 3 full validators running. There is a single top-level quorum that requires 2 of the 3 validators to agree. Postgres, Horizon and Core each exist in their own Docker container.

# History

At the moment I have the history configured so that each validator writes to and gets from its own local archive. It can also get from the archives of the other 2 cores in the quorum. This is configured as per the instructions [here](https://github.com/stellar/stellar-core/blob/master/docs/history.md):

> A common configuration is for each peer in a group to have a single history archive that it knows how to write to, while also knowing how to read from all archives in the group.

In my stellar-core.cfg this translates to:

```
[HISTORY.local]
get="cp /tmp/stellar-core/history/vs/{0} {1}"
put="cp {0} /tmp/stellar-core/history/vs/{1}"
mkdir="mkdir -p /tmp/stellar-core/history/vs/{0}"

[HISTORY.h1]
get="curl http://otherpeer1/history/vs/{0} -o {1}"

[HISTORY.h2]
get="curl http://otherpeer2/history/vs/{0} -o {1}"
```

# Running the network

I start up all of the containers (I'm using Swarm, so I just fire up a stack with all of them in it) and the network starts running. Stellar does its thing: we can submit transactions, query Horizon, etc. So far so good. I then want to simulate node failures, so I restart one of the core containers to see what happens. This is where the trouble begins. The container comes back up and the status is "Joining SCP". When I check the stellar-core log I see a lot of these messages:

```
2019-04-12T10:01:38.753 GBIVX [Process WARNING] process 65 exited 1: gzip -d buckets/tmp/repair-buckets-c4d4750e00c0e5d1/bucket/45/10/c9/bucket-4510c94b7119c043c67d598386f270c9db6c76ee7f3c16967abdb523ec455353.xdr.gz
2019-04-12T10:01:38.753 GBIVX [Work WARNING] Reached retry limit 0 for gunzip-file buckets/tmp/repair-buckets-c4d4750e00c0e5d1/bucket/45/10/c9/bucket-4510c94b7119c043c67d598386f270c9db6c76ee7f3c16967abdb523ec455353.xdr.gz
2019-04-12T10:01:38.753 GBIVX [Work WARNING] Scheduling retry #5/32 in 30 sec, for get-and-unzip-remote-file bucket/45/10/c9/bucket-4510c94b7119c043c67d598386f270c9db6c76ee7f3c16967abdb523ec455353.xdr.gz
gzip: buckets/tmp/repair-buckets-c4d4750e00c0e5d1/bucket/45/10/c9/bucket-4510c94b7119c043c67d598386f270c9db6c76ee7f3c16967abdb523ec455353.xdr.gz: not in gzip format
```

Uh oh. I notice the core isn't actually connected to the other peers either, and it isn't catching up, so I go ahead and issue:

```
stellar-core http-command "connect?peer=othercore&port=11625"
```

This causes the status to switch to catching up, and I get the countdown to the catchup point. Once the countdown reaches the catchup point, though, the stellar-core process stops or restarts (I can't determine exactly which) and the container restarts, sending it back to the "Joining SCP" stage. At this point I believe the peer is non-recoverable :( This is no good for production or my health, so I tried some other configurations.

# Other configurations

I've looked at the following:

- All cores read and write to a single history archive: the network starts, I can restart any core or even all cores, and the network persists. Great! But having a single history archive for the entire network seems a bit risky for production and violates the advice [here](https://www.stellar.org/developers/stellar-core/software/admin.html#configuring-to-publish-to-an-archive):

  > writing to the same archive from different nodes is not supported and will result in undefined behavior, potentially data loss.

  Not good...

- All cores read and write to multiple history archives. This gives the network some redundancy but again violates the advice about writing to the same archive. I left this test overnight and got some interesting results: the whole network functioned, but a missing bucket occurred that couldn't be found in any of the archives. Once that happened and I tried restarting a core, the network was toast.

# Question

So I seem to have followed all the published guidance, but as soon as I restart a single node, that node can never get back into the network. How do other people do this? How are the Stellar public network history archives set up so that when a core goes down for maintenance it can catch up again?
context_20
The set_options operation JSON returned by Horizon does not disambiguate between `signer_keys` of different types (hash, signed txn hash, account):

```
// SHA-256 hash
"signer_key": "XCVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKVKV6IF",
// Ed25519 public key
"signer_key": "GAQUWIRXODT4OE3YE6L4NF3AYSR5ACEHPINM5S3J2F4XKH7FRZD4NDW2",
// Pre-authorized txn hash
"signer_key": "TCPY4QKJLOI6XLFFU7EN6FO7Y5PEFWBP6V4SXXXSQMRW2OYE27KGH4ZP",
```

We know that any value that doesn't start with `G` cannot be a public key. But can we infer anything else? XDR deserialisation disambiguates this enumeration. Can the JSON API do the same?
context_21
The code below is a multisig transaction being submitted to the Stellar Horizon testnet. When I submit the transaction, I get a submission failed error.

```
var alice_public_key = 'GD2HPONSMOTEJQUE2WIUBPMWZ7WPAE7EBLW2RRD2XH6CIJHKCZKAYMZ3';
var alice_seed = 'SCQUFZIFKMF3CYXAYKPKBQZRF33O3YKJMQXL63PVW7O66GBQOTVKR3A5';
var bob_address = 'GCY4UYX6ZC6U7TPOELP2WTVYFN26IGUNYOX5PBAE7BXL32HSTUZ2INFM';
first_public_addr = 'GCY4UYX6ZC6U7TPOELP2WTVYFN26IGUNYOX5PBAE7BXL32HSTUZ2INFM'
first_private_key = 'SANS6OZYMGLB7LUQVPEO7SMNO5SNKS7457IGVAG7AMZBQ2Z67TS4LL3I'
var destinationId = 'GA2C5RFPE6GCKMY3US5PAB6UZLKIGSPIUKSLRB6Q723BM2OARMDUYEJ5';

var StellarSdk = require('stellar-sdk');
var server = new StellarSdk.Server('https://horizon-testnet.stellar.org');

server.loadAccount(destinationId)
  .catch(StellarSdk.NotFoundError, function (error) {
    throw new Error('The destination account does not exist!');
  })
  .then(function () {
    server.loadAccount(alice_public_key)
      .then(function (account) {
        // console.log(account.sequence)
        var rootKeypair = StellarSdk.Keypair.fromSecret(alice_seed)
        var account = new StellarSdk.Account(alice_public_key, account.sequence);
        var transaction = new StellarSdk.TransactionBuilder(account)
          .addOperation(StellarSdk.Operation.setOptions({
            signer: { ed25519PublicKey: first_public_addr, weight: 1 }
          }))
          .addOperation(StellarSdk.Operation.setOptions({
            masterWeight: 1,   // set master key weight
            lowThreshold: 1,
            medThreshold: 2,   // a payment is medium threshold
            highThreshold: 2   // make sure to have enough weight to add up to the high threshold!
          }))
          .build();
        transaction.sign(rootKeypair);

        var transaction = new StellarSdk.TransactionBuilder(account)
          .addOperation(StellarSdk.Operation.payment({
            destination: bob_address,
            asset: StellarSdk.Asset.native(),
            amount: "10"
          }))
          .addMemo(StellarSdk.Memo.text('Test Transaction'))
          .build();
        var secondKeypair = StellarSdk.Keypair.fromSecret(first_private_key);
        transaction.sign(rootKeypair);
        transaction.sign(secondKeypair);
        server.submitTransaction(transaction)
          .then(function (transactionResult) {
            console.log(transactionResult);
          })
          .catch(function (err) {
            console.error(err);
          });
      });
  })
```

ERROR:

```
{ [BadResponseError: Transaction submission failed. Server responded: 400 Bad Request]
  name: 'BadResponseError',
  message: 'Transaction submission failed. Server responded: 400 Bad Request',
  data: {
    type: 'https://stellar.org/horizon-errors/transaction_failed',
    title: 'Transaction Failed',
    status: 400,
    detail: 'The transaction failed when submitted to the stellar network. The `extras.result_codes` field on this response contains further details. Descriptions of each code can be found at: https://www.stellar.org/developers/learn/concepts/list-of-operations.html',
    extras: {
      envelope_xdr: 'AAAAAPR3ubJjpkTChNWRQL2Wz+zwE+QK7ajEern8JCTqFlQMAAAAZABrCz0AAAAIAAAAAAAAAAEAAAAQVGVzdCBUcmFuc2FjdGlvbgAAAAEAAAAAAAAAAQAAAACxymL+yL1Pze4i36tOuCt15BqNw6/XhAT4br3o8p0zpAAAAAAAAAAABfXhAAAAAAAAAAAC6hZUDAAAAECywXEM2F1RUKIKgP5rfnhey0Yr3vjuJdjZMGkExSb7zE2qbrYxs3hBN2Qk/ZsNy9SSbav/79vgNb3ePv5AvFcJ8p0zpAAAAEDGsDwv0p/714rrzSrdhNZ+sP0qr3wWp36VWUnbkzIlhievvJ3Tie7P6iP7iVU5Vobiwh+2mcaz5vLnSFnu3sgG',
      result_codes: [Object],
      result_xdr: 'AAAAAAAAAAD////7AAAAAA=='
    }
  }
}
```
context_22
I'm trying to run the "Creating a payment transaction" example code provided at [https://www.stellar.org/developers/js-stellar-sdk/reference/examples.html][1], but the error I get is:

```
const account = await server.loadAccount(sourcePublicKey);
                      ^^^^^
SyntaxError: await is only valid in async function
```

I am running this on Ubuntu 18.04 (DigitalOcean) with Node.js v8.10 as instructed, and have edited the code so that it uses the test network and key pairs I have set up with the Laboratory. So what do I need to do to my setup to get this example to work, or is there a workaround using promises? I just want to do a test transfer of some lumens on the test network.

[1]: https://www.stellar.org/developers/js-stellar-sdk/reference/examples.html
context_24
I built a docker image from the [v0.17.0 tag of docker-stellar-core-horizon][1]. Then I created a local standalone network from that image:

```
docker run --rm -d -p "8000:8000" -p "11626:11626" \
  -p "5433:5432" --name stellar synesso/stellar:v0.17.0 --standalone
```

I create and fund a few accounts, and make a non-native payment:

```
$ curl -s "http://localhost:8000/accounts/GBZX...MADI/payments" | \
  jq '._embedded.records[3] | del(._links)'
{
  "id": "25769807878",
  "paging_token": "25769807878",
  "transaction_successful": true,
  "source_account": "GBZXN7PIRZGNMHGA7MUUUF4GWPY5AYPV6LY4UV2GL6VJGIQRXFDNMADI",
  "type": "path_payment",
  "type_i": 2,
  "created_at": "2019-02-27T04:40:11Z",
  "transaction_hash": "c5e29c7d19c8af4fa932e6bd3214397a6f20041bc0234dacaac66bf155c02ae9",
  "asset_type": "credit_alphanum12",
  "asset_code": "Chinchilla",
  "asset_issuer": "GAAYHQF2PNZ3H6LE5AX3UJSGUR2DQXHHGXYMHF32TDYF2FFPTTOFI3PA",
  "from": "GBZXN7PIRZGNMHGA7MUUUF4GWPY5AYPV6LY4UV2GL6VJGIQRXFDNMADI",
  "to": "GCYTIVTAEF6AJOZG5TVXE7OZE7FLUXJUJSYAZ3IR2YH4MNINDJJX4DXF",
  "amount": "0.0000001",
  "path": [],
  "source_amount": "0.0000001",
  "source_max": "0.0000001",
  "source_asset_type": "credit_alphanum12",
  "source_asset_code": "Chinchilla",
  "source_asset_issuer": "GBZXN7PIRZGNMHGA7MUUUF4GWPY5AYPV6LY4UV2GL6VJGIQRXFDNMADI"
}
```

In the Horizon DB, the `history_assets` table contains the asset:

```
"id","asset_type","asset_code","asset_issuer"
1,"credit_alphanum12","Chinchilla","GAAYHQF2PNZ3H6LE5AX3UJSGUR2DQXHHGXYMHF32TDYF2FFPTTOFI3PA"
```

But the `asset_stats` table contains nothing. Because [the query to load assets joins these two tables][2] (I might be looking in the wrong place), the call to `/assets` returns a 404:

```
curl "http://localhost:8000/assets"
{
  "type": "https://stellar.org/horizon-errors/not_found",
  "title": "Resource Missing",
  "status": 404,
  "detail": "The resource at the url requested was not found. This usually occurs for one of two reasons: The url requested is not valid, or no data in our database could be found with the parameters provided."
}
```

Most likely I am missing some important config or setup step, but I can't figure out what it is.

[1]: https://github.com/stellar/docker-stellar-core-horizon/tree/28616e09b2464b802a2e478b2504cd38b211de29
[2]: https://github.com/stellar/go/blob/3d2c1defe73dbfed00146ebe0e8d7e07ce4bb1b6/services/horizon/internal/db2/assets/asset_stat.go#L60-L72
context_25
I am using the JavaScript SDK. I just set up a custom asset with an issuing account and a distribution account (DISTRO1). I changed trust and set a limit, well above 0, for the distribution account using my new BPTOKEN. I did not pay the distribution account any of my new BPTOKEN; it has a zero balance. I then set up a secondary distribution account (DISTRO2), changed trust and set a limit for it well above 0, and this time paid the account 5000000 of my BPTOKEN. I attempted to set up a payment from the DISTRO2 account to the DISTRO1 account with the zero balance. The transaction fails. Here is the code:

```
const fs = require('fs');
const StellarSdk = require('stellar-sdk');
const server = new StellarSdk.Server('https://horizon-testnet.stellar.org');
const source = StellarSdk.Keypair.fromSecret('SDDJHVW7TYJMFQ5QKPWQJY2Q5UWJLSC6Y77TVEFCZOHLJEMJKLHUAXQ5');

// Keys for accounts to issue and receive the new asset
var issuingKeys = StellarSdk.Keypair
  .fromSecret('SCWMAYLNIZULNERXQI6DFJ7N4PFCGPX5XY2K33S5OXGHPXWCYA4B2JUT');
// DISTRO2/BASE account - limit = 5000000
var PayerKeys = StellarSdk.Keypair
  .fromSecret('SDDJHVW7TYJMFQ5QKPWQJY2Q5UWJLSC6Y77TVEFCZOHLJEMJKLHUAXQ5');
// DISTRO1/Base account - limit = 1000000
var ReceiverKeys = StellarSdk.Keypair
  .fromSecret('SBKKT7JUQFBWWX5AGA22QCWUXWZRY7PLWFFLKBIR3RE236JDPLHNMUY7');

// Create an object to represent the new asset
var BPTOKEN = new StellarSdk.Asset('BPTOKEN', issuingKeys.publicKey());
var public = source.publicKey();

StellarSdk.Network.useTestNetwork();
server.accounts()
  .accountId(source.publicKey())
  .call()
  .then(({ sequence }) => {
    const account = new StellarSdk.Account(source.publicKey(), sequence);
    const transaction = new StellarSdk.TransactionBuilder(account, { fee: StellarSdk.BASE_FEE })
      .addOperation(StellarSdk.Operation.payment({
        destination: "GADVTLHV5T7J2EZM5MCDBHZMK67Y3H573PCKAOGKE5Y54EEWPFZYWZXB",
        asset: 'BPTOKEN',
        // asset: StellarSdk.Asset.native(),
        amount: "1.50" // 1000.50 XLM
      }))
      .setTimeout(30)
      .build();
    transaction.sign(StellarSdk.Keypair.fromSecret(source.secret()));
    return server.submitTransaction(transaction);
  })
  .catch(function (error) {
    console.log('Error!', error);
  });
```

Question 1: The documentation on the Stellar website for issuing custom assets (https://www.stellar.org/developers/guides/issuing-assets.html) goes through the entire process of changing trust, setting limits and other steps that look like initial setup rather than a simple payment transaction. Does a simple payment between two normal or distribution accounts need the entire change-of-trust, limit-setting and other steps just to exchange assets?

Question 2: If the receiving account has changed its trust to accept BPTOKEN but has a zero balance of said token, will the transaction fail when an account holding BPTOKEN attempts to send BPTOKEN to it?

Question 3: Are there other areas that I missed, or programming errors in the code above?
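For Question 2, the trustline mechanics can be illustrated with a small model (illustrative logic only, not the SDK's API): an incoming payment of a custom asset succeeds as long as a trustline exists and the resulting balance stays within the trustline limit; a zero starting balance on the receiving side is fine.

```javascript
// Illustrative-only model of the receiving side of a custom-asset payment:
// the payment is accepted when a trustline exists and the new balance
// would not exceed the trustline limit. A zero starting balance is fine.
function canReceive(trustline, amount) {
  if (!trustline) return false; // no trustline for the asset: payment fails
  return trustline.balance + amount <= trustline.limit;
}
```

Under this model, a DISTRO1-style account with a trustline limit of 1000000 and a zero balance accepts a 1.50 BPTOKEN payment without issue, which suggests the failure lies elsewhere (for instance, in how the asset is passed to the payment operation).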
context_26
I have successfully set up Bifrost and a Geth server on an AWS EC2 c5.xlarge running Ubuntu. I am sending Ropsten ETH to the generated address, but Bifrost is not listening to any incoming transactions. I've tried figuring out the problem but cannot find a solution.

P.S. I've read through the 'Bifrost is not detecting incoming transaction' thread, but it did not help.

**Things that work perfectly:**

1. Bifrost shows the generated address (I am referring to step 1, "Waiting for a transaction...").
2. The generated address does receive the Ropsten ETH.

**Things NOT working:**

1. Bifrost does not detect any transaction.
2. Bifrost does not exchange 'HUG' for 'ETH'.
3. Bifrost does not present the Stellar public and secret keys.

**Bifrost configuration (bifrost.cfg):**

```
port = 8000
using_proxy = false
access_control_allow_origin_header = "*"

[ethereum]
master_public_key = "[BIP32 Extended Public Key]"
rpc_server = "localhost:8545"
network_id = "3"
minimum_value_eth = "0.00001"
token_price = "0.001"

[stellar]
issuer_public_key = "[ISSUER PUBLIC KEY]"
distribution_public_key = "[DISTRIBUTION PUBLIC KEY]"
signer_secret_key = "[ISSUER SECRET KEY]"
token_asset_code = "HUG"
needs_authorize = true
horizon = "https://horizon-testnet.stellar.org"
network_passphrase = "Test SDF Network ; September 2015"
starting_balance = "4"

[database]
type = "postgres"
dsn = "postgres://[POSTGRES ADDRESS]"
```

**Bifrost client (JavaScript):**

```
var params = {
  network: 'test',
  horizonURL: 'https://horizon-testnet.stellar.org',
  bifrostURL: 'http://18.216.71.189:8000',
  recoveryPublicKey: '[DISTRIBUTION PUBLIC KEY]'
};

var session = new Bifrost.Session(params);
var keypair;

session.startEthereum(onEvent).then(params => {
  setStatus("Waiting for a transaction...", 10);
  document.getElementById("address").innerText = params.address;
  keypair = params.keypair;
}).catch(err => {
  setStatus("Error", 0);
  console.error(err);
});

function onEvent(event, data) {
  if (event === Bifrost.TransactionReceivedEvent) {
    setStatus("Transaction received, creating account...", 20);
  } else if (event === Bifrost.AccountCreatedEvent) {
    setStatus("Account created, configuring account...", 40);
  } else if (event === Bifrost.AccountConfiguredEvent) {
    setStatus("Account configured, waiting for tokens...", 60);
  } else if (event === Bifrost.ExchangedEvent) {
    setStatus("Congrats! TOKE purchased. Your Stellar keys: Public key: " +
      keypair.publicKey() + "\nSecret key: " + keypair.secret(), 100);
  } else if (event === Bifrost.ExchangedTimelockedEvent) {
    setStatus("Congrats! TOKE purchased but will be locked. Your Stellar keys: Public key: " +
      keypair.publicKey() + "\nSecret key: " + keypair.secret() +
      "\nUnlock transaction: " + data.transaction, 100);
  } else if (event === Bifrost.ErrorEvent) {
    setStatus("Error!", 0);
    console.error(data);
  }
}

function setStatus(text, progress) {
  var progressbar = document.getElementById("progressbar");
  progressbar.style.width = progress + "%";
  if (progress === 100) {
    progressbar.className = "progress-bar progress-bar-success";
  }
  document.getElementById("status").innerHTML = text;
}
```
context_27
Please help me find possible solutions to the following problem. A dedicated "minimum balance" service needs to know the number of account entries, among which there are offers (the minimum balance is `(2 + # of entries) × base reserve`). How is it possible to get a full list of offers, given that some offer-affecting transactions are "in flight" and not yet visible via the Horizon API? Such a "minimum balance" service has:

1. All transactions for an account, where some of them might not yet be submitted to the network (with a unique id attached to each transaction as a memo).
2. Access to the Horizon API, including the list of open offers.

How is it possible to correlate transaction ids from (1) with offers from (2) in order not to count them twice?
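The minimum-balance formula quoted above can be sketched directly. This is only an illustration of the arithmetic: the 0.5 XLM base reserve is an assumption matching the public-network setting at the time, and "entries" stands for subentries such as offers, trustlines, extra signers, and data entries.

```javascript
// Sketch of the quoted formula: minimum balance in XLM for an account with
// `subentries` ledger entries (offers, trustlines, extra signers, data
// entries) at a given base reserve. 0.5 XLM is assumed as the base reserve.
function minimumBalance(subentries, baseReserve = 0.5) {
  return (2 + subentries) * baseReserve;
}
```

So an account with four open offers and no other subentries would need `(2 + 4) × 0.5 = 3` XLM, which is exactly why in-flight offer-affecting transactions matter: each one can change `subentries` before Horizon shows it.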
context_28
I have a private network of Stellar Core nodes running on Docker Swarm with the DB and ledger data persisted correctly. When I remove or update the service for maintenance/upgrade of the docker image and then restart it, I keep getting:

```
2019-03-04T04:43:12.761 GAWSB [default INFO] Loading last known ledger
2019-03-04T04:43:12.763 GAWSB [Ledger WARNING] Some buckets are missing in 'buckets'.
2019-03-04T04:43:12.763 GAWSB [Ledger WARNING] Attempting to recover from the history store.
2019-03-04T04:43:12.763 GAWSB [History INFO] Starting RepairMissingBucketsWork
2019-03-04T04:43:12.818 GAWSB [Process WARNING] process 118 exited 1: cp /tmp/stellar-core/history/vs/bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz buckets/tmp/repair-buckets-7460063929a1b56d/bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz.tmp
2019-03-04T04:43:12.819 GAWSB [Work WARNING] Reached retry limit 0 for get-remote-file bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz
2019-03-04T04:43:12.819 GAWSB [Work WARNING] Scheduling retry #1/32 in 1 sec, for get-and-unzip-remote-file bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz
2019-03-04T04:43:13.829 GAWSB [Process WARNING] process 122 exited 1: cp /tmp/stellar-core/history/vs/bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz buckets/tmp/repair-buckets-7460063929a1b56d/bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz.tmp
2019-03-04T04:43:13.829 GAWSB [Work WARNING] Reached retry limit 0 for get-remote-file bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz
```

These buckets don't exist, and I presume it is because the container was killed in the middle of a bucket write. There are issues [here][1] and [here][2] which seem to suggest the only fix for this is to run `newdb`, which is not going to work in production. Is there a clean way of stopping Stellar Core on a container shutdown so that this issue is avoided?

[1]: https://github.com/stellar/stellar-core/issues/1395
[2]: https://github.com/stellar/stellar-core/issues/1518
context_29
I'm playing with the Stellar Java SDK, and I'm trying to send some lumens from a fully funded account to another account on the testnet. Once the transaction is complete, I get back a response with every field null except the "extras" field, which in turn has all its fields null except "envelopeXdr", which is equal to:

> AAAAAPQl3LgGzmS98AbidI2Pf5iG6/Y5JjjA8qGFcCFlAnlAAAAZABnH7IAAAABAAAAAAAAAAEAAAAAAAAAAQAAAAAAAAABAAAAACzKntDKAeTf87abUrrUsJAEPQPEfc7EEcKXsrnaAG2RAAAAAAAAAAAGjneAAAAAAAAAAAGFlAnlAAAAQDYKf5r5BqDRGHm9K0zTfPQ0nSKMZxwKblhnW/GHV1gbXcm4NOfmBNBowjW9xHtY2p94MBPLRJUjYVeR2t3Qw=

How can I know what went wrong?
context_30
I am having some issues with an application I'm trying to build. I am new to JS. What I am trying to accomplish: sending two variables (wallet, amount) to the function `transferLumens`. These two variables come from another function where amount and wallet are stored in a list. This list is looped through to pass wallet and amount to the `transferLumens` function. I checked the variables being passed and they are valid. The payments go through sometimes, but they fail more often than not. The error code is: `Error: Request failed with status code 400`. The Stellar developers guide on [sequence numbers][1] was somewhat helpful. How do I get the current sequence number before submitting a transaction? Any guidance is much appreciated.

```
function transferLumens(wallet, amount) {
  var StellarSdk = require('stellar-sdk');
  var server = new StellarSdk.Server('https://horizon-testnet.stellar.org');
  StellarSdk.Network.useTestNetwork();
  var sourceKeys = StellarSdk.Keypair.fromSecret('SECRETKEY');
  //var testamount = "2.4";
  server.loadAccount(wallet)
    .catch(StellarSdk.NotFoundError, function (error) {
      throw new Error('The destination account does not exist!');
    })
    .then(function () {
      return server.loadAccount(sourceKeys.publicKey());
    })
    .then(function (sourceAccount) {
      transaction = new StellarSdk.TransactionBuilder(sourceAccount)
        .addOperation(StellarSdk.Operation.payment({
          destination: wallet,
          asset: StellarSdk.Asset.native(),
          amount: amount
        }))
        .addMemo(StellarSdk.Memo.text("TEST PAYMENT"))
        .build();
      transaction.sign(sourceKeys);
      // console.log(transaction.toEnvelope().toXDR('base64'));
      return server.submitTransaction(transaction);
    })
    .then(function (result) {
      console.log('Success! Results:', result);
    })
    .catch(function (error) {
      console.error('Something went wrong!', error);
    });
}
```

[1]: https://www.stellar.org/developers/js-stellar-base/reference/building-transactions.html#sequence-numbers
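Intermittent 400s from a loop like this are often caused by several transactions being built from the same loaded sequence number. One way to reason about the fix is a local counter that is loaded once and incremented per transaction; the class below is a minimal sketch of that idea with names invented here for illustration, not taken from the SDK.

```javascript
// Illustrative-only sequence tracker: load the account's current sequence
// once, then hand out consecutive sequence numbers locally so payments
// built in a loop don't all reuse the same one. BigInt is used because
// real Stellar sequence numbers are 64-bit integers.
class SequenceTracker {
  constructor(startingSequence) {
    this.current = BigInt(startingSequence);
  }
  // Each transaction must carry the account's sequence + 1.
  next() {
    this.current += 1n;
    return this.current.toString();
  }
}
```

In SDK terms, the same effect comes from reusing one loaded `Account` object for all transactions (its sequence auto-increments on each build) instead of calling `loadAccount` inside the loop.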
context_31
I am trying to create a Stellar standalone network using my own network passphrase. I am able to set up stellar-core and Horizon in my private network and it is working fine; I am able to get the root account balance through my Node.js code. But I am getting `tx_bad_seq` when I try to create and fund an account. Here is my code:

```
let kp = Stellar.Keypair.random();
let destinationId = kp.publicKey();
let sourceId = "GAZDSYG5HWIKHR76LUP5SGQBPYXYCFUIR45Z2WMOGITGY7JILDC6JAAF";
let sourceKeys = Stellar.Keypair.fromSecret("SDJ5AQWLIAYT22TCYSKOQALI3SNUMPAR63SEL73ASALDP6PYDN54FARM");
try {
  var sourceAccount = await server.loadAccount(sourceId);
  var transaction = new Stellar.TransactionBuilder(sourceAccount)
    .addOperation(Stellar.Operation.createAccount({
      destination: destinationId,
      startingBalance: "100",
      asset: Stellar.Asset.native()
    }))
    .setTimeout(30)
    .addMemo(Stellar.Memo.text('Creating account'))
    .build();
  transaction.sign(sourceKeys);
  var result = await server.submitTransaction(transaction);
  console.log(result);
  res.send(result);
} catch (error) {
  console.log(error);
  res.send({ 'Msg': 'error' });
  return;
}
res.send({ 'Msg': 'Success' });
```

Here is the error I am getting (console dump trimmed to the relevant fields):

```
Error: Request failed with status code 400
response: { status: 400, statusText: "Bad Request", ... }
data: {
  type: "https://stellar.org/horizon-errors/transaction_failed",
  title: "Transaction Failed",
  status: 400,
  detail: "The transaction failed when submitted to the stellar network. The `extras.result_codes` field on this response contains further details. Descriptions of each code can be found at: https://www.stellar.org/developers/learn/concepts/list-of-operations.html",
  extras: {
    envelope_xdr: "AAAAADI5YN09kKPH/l0f2RoBfi+BFoiPO51ZjjImbH0oWMXkAAAAZAAAAAAAAAABAAAAAQAAAAAAAAAAAAAAAFx/mhAAAAABAAAAEENyZWF0aW5nIGFjY291bnQAAAABAAAAAAAAAAAAAAAAXsWc73Eg9YwKLtLg5HwSSx8mVxMgQzcBZwByeMRBIicAAAAAO5rKAAAAAAAAAAABKFjF5AAAAEBQMa0S5PH5zt9AMXqkX3b7cNuRo8KgybYc+1hUNaJNuU7T6G0FZyNq0/AalONixMRMyyUuMBQCF9SZSEVRDxkM",
    result_codes: { transaction: "tx_bad_seq" },
    result_xdr: "AAAAAAAAAGT////7AAAAAA=="
  }
}
stack: "Error: Request failed with status code 400
    at createError (/home/vijin/workspace/stellar/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/home/vijin/workspace/stellar/node_modules/axios/lib/core/settle.js:18:12)
    at IncomingMessage.handleStreamEnd (/home/vijin/workspace/stellar/node_modules/axios/lib/adapters/http.js:201:11)
    at emitNone (events.js:111:20)
    at IncomingMessage.emit (events.js:208:7)
    at endReadableNT (_stream_readable.js:1064:12)
    at _combinedTickCallback (internal/process/next_tick.js:139:11)
    at process._tickCallback (internal/process/next_tick.js:181:9)"
```

Please let me know why I am facing this issue. I followed this guide to set up the stellar-core and Horizon servers: [https://labs.imaginea.com/post/stellar-bc-wallet/][1]

[1]: https://labs.imaginea.com/post/stellar-bc-wallet/
context_32
At the beginning of Stellar, we used a classical account/password authentication to access our Stellar balance. Why has this been changed, and what are the pros of now using a private key instead of an account/password?
context_33
Just recently, [a major hack][1] lifted lumens worth $400k from the digital wallet provider BlackWallet. What are the most important security considerations when picking a Stellar wallet? How can I verify and keep on top of security issues regarding the wallet I use?

[1]: https://www.coindesk.com/400k-hacker-makes-off-with-stellar-lumens-in-blackwallet-theft/
context_35
Setting: a private network with 4 nodes running well and validating each other. Now I try to add a 5th node for testing, and encounter a [Horizon ingestion issue][1].

This is how I made node "5a" (there is also a "5b"):

Step 1: made a copy of one of the 4 existing nodes.
Step 2: cleared all DBs; changed the IP; changed NODE_SEED; set stellar-core.cfg to have 5 validators (the 4 existing ones plus itself) and CATCHUP_COMPLETE=true.
Step 3: rebooted and started this 5th node.

After a while, the following appeared in the Horizon log:

```
time="2018-03-15T11:07:29+08:00" level=warning msg="ingest: waiting for stellar-core sync" pid=1593
time="2018-03-15T11:07:30+08:00" level=warning msg="ingest: waiting for stellar-core sync" pid=1593
time="2018-03-15T11:07:31+08:00" level=warning msg="ingest: waiting for stellar-core sync" pid=1593
time="2018-03-15T11:07:32+08:00" level=info msg="history db is empty, establishing base at ledger 2" pid=1593
time="2018-03-15T11:07:36+08:00" level=info msg="ingest: already in progress" pid=1593
time="2018-03-15T11:07:38+08:00" level=info msg="ingest: already in progress" pid=1593
time="2018-03-15T11:07:40+08:00" level=info msg="ingest: already in progress" pid=1593
```

As it looked strange that Horizon counted from **ledger 2**, I repeated the above steps with a different IP. At node "5b" (also with only 5 validators), Horizon established its base at **ledger 41**:

```
time="2018-03-15T12:01:38+08:00" level=warning msg="ingest: waiting for stellar-core sync" pid=1592
time="2018-03-15T12:01:39+08:00" level=warning msg="ingest: waiting for stellar-core sync" pid=1592
time="2018-03-15T12:01:40+08:00" level=warning msg="ingest: waiting for stellar-core sync" pid=1592
time="2018-03-15T12:01:41+08:00" level=info msg="history db is empty, establishing base at ledger 41" pid=1592
time="2018-03-15T12:01:45+08:00" level=info msg="ingest: already in progress" pid=1592
time="2018-03-15T12:01:47+08:00" level=info msg="ingest: already in progress" pid=1592
time="2018-03-15T12:01:49+08:00" level=info msg="ingest: already in progress" pid=1592
```

Any idea what went wrong?

[1]: https://github.com/stellar/go/issues/335
context_36
I'm trying to send an asset on testnet and I'm getting this error: `op_invalid_limit`. What could this mean?
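`op_invalid_limit` is the result code of a ChangeTrust operation whose limit is invalid: negative, zero while the account still holds the asset, or below the current balance. The function below is a rough sketch of that rule, illustrative logic only and simplified from the actual stellar-core check (which also accounts for liabilities in later protocol versions).

```javascript
// Illustrative-only version of the ChangeTrust limit rule behind
// op_invalid_limit: the new limit must not be negative and must not drop
// below the amount of the asset the account already holds.
function changeTrustLimitIsValid(newLimit, currentBalance) {
  if (newLimit < 0) return false;
  // A zero limit is only allowed when the balance is already zero
  // (that is how a trustline gets deleted).
  if (newLimit === 0) return currentBalance === 0;
  return newLimit >= currentBalance;
}
```

So if the transaction includes a ChangeTrust step, checking its limit against the account's existing balance of that asset is the first place to look.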
context_37
I am seeing plenty of these warnings while my node is catching up with CATCHUP_COMPLETE:

```
2018-10-27T09:32:08.197 GADLA [Herder INFO] Quorum information for 20686064 : {"agree":18,"disagree":0,"fail_at":3,"hash":"0c093d","missing":6,"phase":"EXTERNALIZE","validated":false}
2018-10-27T09:32:08.197 GADLA [Herder WARNING] Ledger 20686066 (8b2723) closed and could NOT be fully validated by validator
2018-10-27T09:32:08.216 GADLA [Ledger INFO] Got consensus: [seq=20686066, prev=15a7dd, tx_count=10, sv: [txH: 8b2723, ct: 1540632097, upgrades: []]]
```

Why is my node not validating? I guess it's because I am still catching up and have not reached the most recent ledgers yet. Will validation begin once I reach the synced state? It will still take a few days to catch up, and I am curious to find out whether there's a different problem in the meantime. I dug through the code but failed to find the piece that sets `validated: true/false` in the quorum information JSON. I hope someone more experienced can chime in here.
context_38
I am trying to take data hashed using the SCryptUtil provided by com.lambdaworks.crypto.SCryptUtil, convert it into an array of bytes, and send it to the Stellar network using ManageDataOperation.Builder. My code is as follows:

```
String hash = SCryptUtil.scrypt("HashData", 16384, 8, 1);
System.out.println(hash);
String hashedData = hash;
byte[] hashedDataByte = hashedPass.getBytes();
ManageDataOperation.Builder updateHashedData =
    new ManageDataOperation.Builder("Hashed Data", hashedDataByte);
Transaction tt = new Transaction.Builder(sourceAccount)
    .addOperation(updateHashedData.build())
    .addMemo(Memo.text("Test Transaction"))
    .setTimeout(1000)
    .build();
```

The data is not sent to the server. But when I use a simple string like "Password" for the "hash" variable, it gets sent. What am I missing here? Is there a way to send long hash codes to a Stellar account this way?
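A likely factor is size: a ManageData entry's value is capped at 64 bytes on the ledger, and an scrypt MCF string (`$s0$e0801$...`) is typically longer than that, while a short string like "Password" fits. The check below sketches the length rule (the 64-byte cap comes from the Stellar ledger entry definition; the function name is invented for illustration):

```javascript
// ManageData values are limited to 64 bytes on the ledger; an scrypt MCF
// hash string is usually longer, so it has to be split across multiple
// entries or stored in a shorter form (e.g. a 32-byte digest) first.
function fitsManageData(value) {
  return Buffer.byteLength(value, 'utf8') <= 64;
}
```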
context_40
I need to transfer XLM to my own (self-created) account. I have an account generated by the Stellar Account Viewer (https://www.stellar.org/account-viewer/#!/), but it seems the account is currently inactive, so there is no way to transfer any XLM to fund it. Do you know of an exchange that allows transferring XLM to an inactive account generated by the Stellar Account Viewer?
context_42
How do I get the transaction records for a given day? I am new to Stellar, so how do I do this in Java? Is there any way?
context_43
I have a private network of Stellar Core nodes running on Docker Swarm, with the DB and ledger data persisted correctly. When I remove or update the service for maintenance/upgrade of the Docker image and then restart it, I keep getting:

```
2019-03-04T04:43:12.761 GAWSB [default INFO] Loading last known ledger
2019-03-04T04:43:12.763 GAWSB [Ledger WARNING] Some buckets are missing in 'buckets'.
2019-03-04T04:43:12.763 GAWSB [Ledger WARNING] Attempting to recover from the history store.
2019-03-04T04:43:12.763 GAWSB [History INFO] Starting RepairMissingBucketsWork
2019-03-04T04:43:12.818 GAWSB [Process WARNING] process 118 exited 1: cp /tmp/stellar-core/history/vs/bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz buckets/tmp/repair-buckets-7460063929a1b56d/bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz.tmp
2019-03-04T04:43:12.819 GAWSB [Work WARNING] Reached retry limit 0 for get-remote-file bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz
2019-03-04T04:43:12.819 GAWSB [Work WARNING] Scheduling retry #1/32 in 1 sec, for get-and-unzip-remote-file bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz
2019-03-04T04:43:13.829 GAWSB [Process WARNING] process 122 exited 1: cp /tmp/stellar-core/history/vs/bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz buckets/tmp/repair-buckets-7460063929a1b56d/bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz.tmp
2019-03-04T04:43:13.829 GAWSB [Work WARNING] Reached retry limit 0 for get-remote-file bucket/61/83/ea/bucket-6183ea169e9b74480c793417fa42694cc961df5e47d4ca9e9a89164ceadccfa8.xdr.gz
```

These buckets don't exist, and I presume that is because the container was killed in the middle of a bucket write. There are issues [here][1] and [here][2] which seem to suggest the only fix for this is to run `newdb`, which is not going to work in production... Is there a clean way of stopping Stellar Core on a container shutdown so that this issue is avoided?

[1]: https://github.com/stellar/stellar-core/issues/1395
[2]: https://github.com/stellar/stellar-core/issues/1518
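One possibly relevant detail (editorial note, not from the original post): Docker stops a task by sending `SIGTERM` and escalating to `SIGKILL` after a default 10-second grace period, which can kill a node mid-write. A minimal Compose/Swarm sketch of the settings that control this; the service name, image name, and timeout value are illustrative assumptions, and whether `SIGINT` or `SIGTERM` triggers a graceful shutdown depends on the stellar-core version:

```yaml
# Sketch only: give stellar-core more time to shut down cleanly before
# Docker escalates to SIGKILL. Names and values below are assumptions.
services:
  stellar-core:
    image: my-stellar-core:latest   # hypothetical image name
    stop_signal: SIGINT             # signal assumed to trigger graceful shutdown
    stop_grace_period: 2m           # wait up to 2 minutes before SIGKILL
```

Check the release notes for your core version before relying on a particular signal.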
context_44
As the title suggests, I am trying to send XLM from my Ledger Blue via https://www.stellar.org/account-viewer/#!/dashboard. I have done this numerous times in the past and not had an issue. My firmware and software are up to date on the Ledger Blue, and I am using Chrome. I verified the memo and address I am sending to, I get to the screen that says "Check the transaction using your Ledger device. Submitting transaction to the network..." with the picture of the rocket, I click submit on the Ledger Blue, and then nothing else happens. I have also had the following error:

```
{
  "type": "https://stellar.org/horizon-errors/transaction_failed",
  "title": "Transaction Failed",
  "status": 400,
  "detail": "The transaction failed when submitted to the stellar network. The `extras.result_codes` field on this response contains further details. Descriptions of each code can be found at: https://www.stellar.org/developers/learn/concepts/list-of-operations.html",
  "extras": {
    "envelope_xdr": "AAAAAN7JMvKZafqQZzRd41XrftTxZDFnFxfquZrzVf+S7c/2AAAAZAD/sl0AAAABAAAAAQAAAAAAAAAAAAAAAFy9yu0AAAACAAAAAD7iElEAAAABAAAAAAAAAAEAAAAADq+QhtWseqhtnwRIFyZRdLMOVtIqzkujfzUQ22rwZuEAAAAAAAAB0aixiYAAAAAAAAAAAZLtz/YAAABA+qSn3Z7nCjEs3qL/512XdzwHSeesRZrmYg5IWVkRfP39D07K8f4QeNnSREDpjXxT+cRUElMM6CiyrBgX1cm/AA==",
    "result_codes": {
      "transaction": "tx_bad_auth"
    },
    "result_xdr": "AAAAAAAAAGT////6AAAAAA=="
  }
}
```

I was told the error (which I get about 50/50) was a "signature issue". I floated this on Reddit and tried all of the suggestions there: swapped out the USB connector cable, did a complete recovery-mode reboot of the Ledger Blue, reinstalled the firmware/software, emptied the cache in Chrome, and tried using Opera. I am using a Mac with the latest OS. Thanks very much for any assistance you can provide!
context_45
I am having some issues with an application I'm trying to build. I am new to JS. What I am trying to accomplish: sending two variables (wallet, amount) to the function `transferLumens`. These two variables come from another function where amount and wallet are stored in a list. This list is looped through, and each wallet/amount pair is passed to `transferLumens`. I checked the variables being passed and they are valid. The payments go through sometimes, but they fail more often than not. The error is: `Error: Request failed with status code 400`. The Stellar developer guide on [sequence numbers][1] was somewhat helpful. How do I get the current sequence number before submitting a transaction? Any guidance is much appreciated.

```javascript
function transferLumens(wallet, amount) {
  var StellarSdk = require('stellar-sdk');
  var server = new StellarSdk.Server('https://horizon-testnet.stellar.org');
  StellarSdk.Network.useTestNetwork();
  var sourceKeys = StellarSdk.Keypair.fromSecret('SECRETKEY');
  //var testamount = "2.4";
  server.loadAccount(wallet)
    .catch(StellarSdk.NotFoundError, function (error) {
      throw new Error('The destination account does not exist!');
    })
    .then(function () {
      return server.loadAccount(sourceKeys.publicKey());
    })
    .then(function (sourceAccount) {
      transaction = new StellarSdk.TransactionBuilder(sourceAccount)
        .addOperation(StellarSdk.Operation.payment({
          destination: wallet,
          asset: StellarSdk.Asset.native(),
          amount: amount
        }))
        .addMemo(StellarSdk.Memo.text("TEST PAYMENT"))
        .build();
      transaction.sign(sourceKeys);
      // console.log(transaction.toEnvelope().toXDR('base64'));
      return server.submitTransaction(transaction);
    })
    .then(function (result) {
      console.log('Success! Results:', result);
    })
    .catch(function (error) {
      console.error('Something went wrong!', error);
    });
  }

[1]: https://www.stellar.org/developers/js-stellar-base/reference/building-transactions.html#sequence-numbers
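A likely culprit here (an editorial guess, not stated in the post): when many payments are fired in parallel from the same source account, each call loads the same sequence number, so all but one submission is rejected. One hedged way around that is to run the submissions strictly one after another. The sketch below is generic promise chaining, not the official SDK API; `submitFn` is a hypothetical stand-in for the loadAccount/build/sign/submit pipeline above:

```javascript
// Sketch: chain an async submit function over a list of payments so each
// transaction starts only after the previous one has settled. This keeps
// the source account's sequence number monotonic between submissions.
function submitSequentially(payments, submitFn) {
  return payments.reduce(function (chain, payment) {
    return chain.then(function (results) {
      return submitFn(payment).then(function (result) {
        results.push(result); // collect each submission result in order
        return results;
      });
    });
  }, Promise.resolve([]));
}

// Hypothetical usage with the question's transferLumens (which would need
// to return its promise instead of swallowing it):
//   submitSequentially(list, function (p) {
//     return transferLumens(p.wallet, p.amount);
//   });
```

The trade-off is throughput: one in-flight transaction at a time per source account, which is exactly what the sequence-number rule enforces anyway.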
context_46
I built a Docker image from the [v0.17.0 tag of docker-stellar-core-horizon][1]. Then, I created a local standalone network from that image:

```
docker run --rm -d -p "8000:8000" -p "11626:11626" \
  -p "5433:5432" --name stellar synesso/stellar:v0.17.0 --standalone
```

I create and fund a few accounts, and make a non-native payment.

```
$ curl -s "http://localhost:8000/accounts/GBZX...MADI/payments" | \
  jq '._embedded.records[3] | del(._links)'
{
  "id": "25769807878",
  "paging_token": "25769807878",
  "transaction_successful": true,
  "source_account": "GBZXN7PIRZGNMHGA7MUUUF4GWPY5AYPV6LY4UV2GL6VJGIQRXFDNMADI",
  "type": "path_payment",
  "type_i": 2,
  "created_at": "2019-02-27T04:40:11Z",
  "transaction_hash": "c5e29c7d19c8af4fa932e6bd3214397a6f20041bc0234dacaac66bf155c02ae9",
  "asset_type": "credit_alphanum12",
  "asset_code": "Chinchilla",
  "asset_issuer": "GAAYHQF2PNZ3H6LE5AX3UJSGUR2DQXHHGXYMHF32TDYF2FFPTTOFI3PA",
  "from": "GBZXN7PIRZGNMHGA7MUUUF4GWPY5AYPV6LY4UV2GL6VJGIQRXFDNMADI",
  "to": "GCYTIVTAEF6AJOZG5TVXE7OZE7FLUXJUJSYAZ3IR2YH4MNINDJJX4DXF",
  "amount": "0.0000001",
  "path": [],
  "source_amount": "0.0000001",
  "source_max": "0.0000001",
  "source_asset_type": "credit_alphanum12",
  "source_asset_code": "Chinchilla",
  "source_asset_issuer": "GBZXN7PIRZGNMHGA7MUUUF4GWPY5AYPV6LY4UV2GL6VJGIQRXFDNMADI"
}
```

In the Horizon DB, the `history_assets` table contains the asset:

```
"id","asset_type","asset_code","asset_issuer"
1,"credit_alphanum12","Chinchilla","GAAYHQF2PNZ3H6LE5AX3UJSGUR2DQXHHGXYMHF32TDYF2FFPTTOFI3PA"
```

But the `asset_stats` table contains nothing. Because [the query to load assets joins these two tables][2] (I might be looking in the wrong place), the call to `/assets` returns a 404.

```
curl "http://localhost:8000/assets"
{
  "type": "https://stellar.org/horizon-errors/not_found",
  "title": "Resource Missing",
  "status": 404,
  "detail": "The resource at the url requested was not found. This is usually occurs for one of two reasons: The url requested is not valid, or no data in our database could be found with the parameters provided."
}
```

Most likely, I am missing some important config or setup step, but I can't figure out what it is.

[1]: https://github.com/stellar/docker-stellar-core-horizon/tree/28616e09b2464b802a2e478b2504cd38b211de29
[2]: https://github.com/stellar/go/blob/3d2c1defe73dbfed00146ebe0e8d7e07ce4bb1b6/services/horizon/internal/db2/assets/asset_stat.go#L60-L72
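An aside worth checking (an editorial assumption, not something established in the post): Horizon releases of this era gated the population of `asset_stats` behind an opt-in ingestion flag, separate from the ordinary transaction import, roughly:

```
# Hypothetical environment for the horizon process; verify the exact
# variable names against `horizon --help` for your build before relying
# on them.
INGEST=true
ENABLE_ASSET_STATS=true
```

Even if the flag names differ in your version, the general point may apply: `asset_stats` is filled by an optional ingestion path, so an empty table with a populated `history_assets` suggests that path is disabled.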
context_47
I need to retrieve images dynamically from the broker; they are published as dynamic image component presentations. I am not sure about the model. I have tried to replicate the way the default image model is created, but with no luck: I am not getting the image object, although the metadata is available. Please suggest.
context_48
We are using DD4T with SDL Tridion 2013. I am facing an issue with .doc and .xls files. I have published the files, and they are visible in the broker DB. On the website, the links also carry the appropriate relative URL, but when I click a link it throws a file-not-found exception. On the same page I have a PDF, which works completely fine. Are there any routes specific to .doc and .xls? I am getting the following error:

```
[HttpException]: Page cannot be found
   at DD4T.Mvc.Controllers.TridionControllerBase.Page(String pageId)
   at IHS.Tridion.ContentDelivery.Controllers.PageController.Page(String pageId)
   at lambda_method(Closure, ControllerBase, Object[])
```

Thanks in advance.
context_49
Working on a database refresh in SDL Tridion Sites 8.5.

**Prework**

1. Copy databases down from source: CM DB, Broker DBs, Discovery DBs.
2. Bring down all containers for CIS.
3. Modify the Dockerfile (e.g. Dockerfile.live.discovery) with the updated (target) SQL Server DB credentials.

**TopMan**

4. Get-TtmCdEnvironment
5. Disable TopMan CdEnv
6. Export TopMan CdStructure
7. Import TopMan CdStructure

Then, when I try to run `Sync-TtmCdEnvironment` and pass a CdEnvId, I get the `OAuth` error below:

```
PS C:\Windows\system32> Sync-TtmCdEnvironment

cmdlet Sync-TtmCdEnvironment at command pipeline position 1
Supply values for the following parameters:
(Type !? for Help.)
Id: !?
ID of the CdEnvironment
Id: Live_CdEnvId
Sync-TtmCdEnvironment : Unable to synchronize item of type 'CdEnvironmentData' with id 'Live_CdEnvId'.
One or more errors occurred. An error occurred while processing this request. {"error":"invalid_grant"}
At line:1 char:1
+ Sync-TtmCdEnvironment
+ ~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (Tridion.Topolog...ironmentCommand:SyncEnvironmentCommand) [Sync-TtmCdEnvironment], DataServiceException
    + FullyQualifiedErrorId : Tridion.TopologyManager.Automation.Cmdlets.SyncEnvironmentCommand
```

However, when I validate in Postman everything looks good:

> http://server.com:8082/discovery.svc
> TridionWebDiscovery WebApplications PublicationMappings Environment

Question: is there somewhere in the `Sync-TtmCdEnvironment` cmdlet that I can pass in the OAuth credentials, or how does this work exactly?
context_50
I am currently adding multimedia images to the RTF via the insert toolbar. When the GUI popup appears and I select an image, I am given the choice of 'Advanced' properties. For height and width specifically, the fields are populated with the uploaded image's pixel dimensions.

My issue: the image does not adhere to custom height and width values, and when the site is loaded the image is served via the binary content URL with a data-aspect of 1.62. My model declares the field with the property type "RichText", and the template renders the value with the helper `Html.DxaRichText(Model.Field)`.

My question: how does the DXA RichText helper process these style elements? I got as far as the RichTextProcessor as well as some other HTML extension bits, but couldn't quite sort it out (as of yet!).

My initial thinking: it seems that DXA was intended to handle images responsively, leveraging some of the resizing helpers where elements can be declared as parameters. So either handling the 'Advanced' values provided in the GUI was left for us to implement, or the helpers are stripping them out. Based on this post: http://tridion.stackexchange.com/questions/11584/prevent-style-tags-from-being-removed-by-rtf-fields I am inclined to think the same thing is happening, although I find it odd that it's not accepted nowadays, seeing as the CMS GUI inputs these values into the RTF.

Side note: when the property is not of RichText type, the image scales to 100%.
context_51
We have an issue where our database transactions get locked during mass publishing on live. We are running the broker DB on MSSQL. We have the following configuration in the cd_deployer_conf.xml file: We also publish to 2 destinations. What we observe is that our transactions get locked in the database during mass publishing to both destinations, and the transactions just stop changing state. What we then need to do is remote to the DB server and manually kill the DB transactions. We contacted SDL support and they suggested decreasing the number of workers, which is unacceptable to us because of performance. Has anyone observed a similar issue, and how did you fix it? Thanks in advance.
context_52
I'm getting this error in the core log file while running the Tridion Reference Implementation website. The stack trace of the error is here:

```
java.lang.ExceptionInInitializerError
	at com.tridion.meta.BinaryMetaFactory.getMetaByURL(BinaryMetaFactory.java:271)
Caused by: java.lang.RuntimeException: Fatal error, unable to load the StorageManagerFactory
	at com.tridion.storage.StorageManagerFactory.reloadInstance(StorageManagerFactory.java:91)
	at com.tridion.storage.StorageManagerFactory.(StorageManagerFactory.java:56)
	... 1 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'defaultdbEntityManagerFactory': Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/hibernate/cfg/ObjectNameNormalizer
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1455)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
	at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
	at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1093)
	at com.tridion.storage.persistence.JPADAOFactory.configureBundle(JPADAOFactory.java:76)
	at com.tridion.storage.configuration.StorageFactoryConfigurationLoader.configureStorage(StorageFactoryConfigurationLoader.java:91)
	at com.tridion.storage.configuration.StorageFactoryConfigurationLoader.configureStorage(StorageFactoryConfigurationLoader.java:65)
	at com.tridion.storage.configuration.StorageFactoryConfigurationLoader.configure(StorageFactoryConfigurationLoader.java:51)
	at com.tridion.configuration.step.ConfigurationStepLoader.configure(ConfigurationStepLoader.java:47)
	at com.tridion.storage.StorageManagerFactory.configure(StorageManagerFactory.java:137)
	at com.tridion.services.BaseService.(BaseService.java:113)
	at com.tridion.storage.StorageManagerFactory.(StorageManagerFactory.java:104)
	at com.tridion.storage.StorageManagerFactory.reloadInstance(StorageManagerFactory.java:84)
	... 2 more
Caused by: java.lang.NoClassDefFoundError: org/hibernate/cfg/ObjectNameNormalizer
	at org.hibernate.ejb.Ejb3Configuration.(Ejb3Configuration.java:150)
	at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:71)
	at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:257)
	at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:310)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
	... 18 more
Caused by: java.lang.ClassNotFoundException: org.hibernate.cfg.ObjectNameNormalizer
	at java.net.URLClassLoader$1.run(Unknown Source)
	at java.net.URLClassLoader$1.run(Unknown Source)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(Unknown Source)
	at java.lang.ClassLoader.loadClass(Unknown Source)
	at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
	at java.lang.ClassLoader.loadClass(Unknown Source)
	... 24 more
Caused by: java.io.EOFException: Detect premature EOF
	at sun.misc.Resource.getBytes(Unknown Source)
	at java.net.URLClassLoader.defineClass(Unknown Source)
	at java.net.URLClassLoader.access$100(Unknown Source)
	... 31 more
	at Codemesh.JuggerNET.JavaClass.ThrowException(Exception ex)
	at Codemesh.JuggerNET.NTypeValue.Throw(Int64 inst)
	at Codemesh.JuggerNET.JavaClass.ThrowTypedException(Int64 inst)
	at Codemesh.JuggerNET.JavaMethod.CallObject(JavaProxy jpo, JavaMethodArguments args)
	at Com.Tridion.Meta.BinaryMetaFactory.GetMetaByURL(Int32 publicationId, String urlPath)
	at Tridion.ContentDelivery.Meta.BinaryMetaFactory.GetMetaByUrl(Int32 publicationId, String urlPath)
	at DD4T.Providers.SDLTridion2013.TridionBinaryProvider.GetLastPublishedDateByUrl(String url)
	at Sdl.Web.DD4T.Statics.BinaryFileManager.ProcessUrl(String urlPath, Boolean cacheSinceLastRefresh, String physicalPath)
```
context_53
I would like to use the `$models` object while hacking JavaScript in the Google Chrome console. For example, I would like a one-line sample of how to get the `$models` object and then be able to call: `$models.getItem("tcm:1-42")`.
context_54
I am working with XPM in Tridion 2013 at the moment. I want to change the page template of one of my pages, but when I click on the drop-down an error notification pops up. It says: `Unable to get template type with legacy id: 1. Please check that legacy pack is installed. Unable to get template type Unable to get template type with id: 1`. The publication containing my page was created in 2011, and I think that this might be the problem. I made another publication after I installed 2013, and inside this publication everything is fine: there is no error when I try to change the template, and the templates are shown in the list. Here is a short video of the described issue: http://screencast.com/t/Fl2qcV62lQ. Have any of you had the same problem? Any suggestions? Thanks
context_55
I am getting an error while running this script under PowerShell:

[![enter image description here][1]][1]

    & '.\quickinstall.ps1 -license FILE -enable-discovery -enable-deployer -enable-preview -enable-session'

The actual command run was:

    PS C:\SDLWeb\cdinstall\resources\quickinstall> & '.\quickinstall.ps1 C:\Installation\SDL Web 8\VMSDLWEB8\cd_licenses -enable-discovery -enable-deployer -enable-preview -enable-session'

where the file path given is the CD license path.

[1]: http://i.stack.imgur.com/vx5uk.png
context_56
We migrated an old ASP project from 5.3 to 2011 SP1, and on one environment we cannot get dynamic linking working. CD Linking has been installed as a Windows service, which is running fine on the impacted server (as described on http://stackoverflow.com/questions/12493407/getting-dynamic-linking-working-on-classic-asp-pages). All config and license files are valid and put in the right place, but the error we get relates to cd_broker_conf.xml, while it is cd_storage_conf.xml that is available. See the logging below for more detailed information. Are we missing/overlooking something?

```
2013-05-21 15:33:19,213 ERROR ASPBroker - Could not load Broker Configuration, using default values where possible.
com.tridion.configuration.ConfigurationException: Can't find configuration file: [cd_broker_conf.xml]
	at com.tridion.configuration.XMLConfigurationReader.readConfiguration(XMLConfigurationReader.java:92) ~[cd_core.jar:na]
	at com.tridion.Controller.loadConfiguration(Controller.java:404) ~[cd_core.jar:na]
	at com.tridion.Controller.(Controller.java:116) ~[cd_core.jar:na]
	at com.tridion.broker.Broker.(Broker.java:81) ~[cd_datalayer.jar:na]
	at com.tridion.broker.ASPBroker.(ASPBroker.java:51) [cd_datalayer.jar:na]
	at com.tridion.broker.ASPBroker.getInstance(ASPBroker.java:63) [cd_datalayer.jar:na]
	at com.tridion.broker.ASPBroker.main(ASPBroker.java:132) [cd_datalayer.jar:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.6.0_37]
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.6.0_37]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.6.0_37]
	at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.6.0_37]
	at com.tridion.jni.ServiceHandler.runMethod(ServiceHandler.java:954) [cd_core.jar:na]
	at com.tridion.jni.ServiceHandler.runStaticMethod(ServiceHandler.java:932) [cd_core.jar:na]
	at com.tridion.jni.ServiceHandler.runMain(ServiceHandler.java:916) [cd_core.jar:na]
	at com.tridion.jni.ServiceHandler.startServices(ServiceHandler.java:772) [cd_core.jar:na]
	at com.tridion.jni.ServiceHandler.(ServiceHandler.java:587) [cd_core.jar:na]
	at com.tridion.jni.ServiceHandler.(ServiceHandler.java:524) [cd_core.jar:na]
	at com.tridion.jni.ServiceHandler.main(ServiceHandler.java:136) [cd_core.jar:na]
```
context_57
I'm having a bit of a nightmare trying to write a storage extension (for Tridion 2013). So far I have set up the following:

- Running the deployer upload servlet in Tomcat via Eclipse.
- Start the process, drop in a deployer zip package, and successfully deploy.

So all is good with the default setup. Now I've tried to extend the FSPageDAO class. I've updated cd_storage_conf.xml and the referenced DAO bundle XML. The class itself does very little at the moment:

```java
package com.tridion.storage.extensions;

import java.io.File;
import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.tridion.broker.StorageException;
import com.tridion.data.CharacterData;
import com.tridion.storage.filesystem.FSEntityManager;
import com.tridion.storage.filesystem.FSPageDAO;

public class CustomFSPageDAO extends FSPageDAO {

    private Logger log = LoggerFactory.getLogger(CustomFSPageDAO.class);

    public CustomFSPageDAO(String storageId, String storageName, File storageLocation, FSEntityManager entityManager) {
        super(storageId, storageName, storageLocation, entityManager);
        this.storageId = storageId;
        log.debug("CustomFSPageDAO init. (EM)");
    }

    public CustomFSPageDAO(String storageId, String storageName, File storageLocation) {
        super(storageId, storageName, storageLocation);
        this.storageId = storageId;
        log.debug("CustomFSPageDAO init.");
    }

    @Override
    public void create(CharacterData page, String relativePath) throws StorageException {
        log.debug("Create.");
        super.create(page, relativePath);
    }

    @Override
    public void update(CharacterData page, String originalRelativePath, String newRelativePath) throws StorageException {
        log.debug("Update.");
        super.update(page, originalRelativePath, newRelativePath);
    }

    @Override
    public void remove(int publicationId, int pageId, String relativePath) throws StorageException {
        log.debug("Remove.");
        super.remove(publicationId, relativePath);
    }
}
```

When debugging in Eclipse, the HTTP upload servlet fails to start, and the following is logged:

```
2016-04-17 21:45:41,264 INFO  BundleConfigurationLoader - Custom storage bindings defined, loading customDAOBundle.xml
2016-04-17 21:45:41,275 WARN  BundleConfigurationLoader - CAUTION: Replaced 'Page' for storage 'filesystem' with 'com.tridion.storage.extensions.CustomFSPageDAO'.
2016-04-17 21:45:41,278 INFO  ConfigurationStepLoader - Executing configuration step: FactoryLoader
2016-04-17 21:45:41,323 ERROR StorageManagerFactory - Fatal error, unable to load the StorageManagerFactory
java.lang.ClassCastException: The target class does not match the specified target type
	at com.tridion.util.ReflectionUtil.loadClassInstanceWithTypes(ReflectionUtil.java:83) ~[cd_core.jar:na]
	at com.tridion.util.ReflectionUtil.loadClassInstance(ReflectionUtil.java:108) ~[cd_core.jar:na]
	at com.tridion.storage.filesystem.FSDAOFactory.configureBundle(FSDAOFactory.java:74) ~[cd_datalayer.jar:na]
	at com.tridion.storage.configuration.StorageFactoryConfigurationLoader.configureStorage(StorageFactoryConfigurationLoader.java:91) ~[cd_datalayer.jar:na]
	at com.tridion.storage.configuration.StorageFactoryConfigurationLoader.configureStorage(StorageFactoryConfigurationLoader.java:65) ~[cd_datalayer.jar:na]
	at com.tridion.storage.configuration.StorageFactoryConfigurationLoader.configure(StorageFactoryConfigurationLoader.java:51) ~[cd_datalayer.jar:na]
	at com.tridion.configuration.step.ConfigurationStepLoader.configure(ConfigurationStepLoader.java:47) ~[cd_core.jar:na]
	at com.tridion.storage.StorageManagerFactory.configure(StorageManagerFactory.java:137) [cd_datalayer.jar:na]
	at com.tridion.services.BaseService.(BaseService.java:113) ~[cd_core.jar:na]
	at com.tridion.storage.StorageManagerFactory.(StorageManagerFactory.java:104) [cd_datalayer.jar:na]
	at com.tridion.storage.StorageManagerFactory.reloadInstance(StorageManagerFactory.java:84) [cd_datalayer.jar:na]
	at com.tridion.storage.StorageManagerFactory.(StorageManagerFactory.java:56) [cd_datalayer.jar:na]
	at com.tridion.transport.HTTPSReceiverServlet.init(HTTPSReceiverServlet.java:86) [cd_upload.jar:na]
	at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1238) [catalina.jar:8.0.33]
	at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1151) [catalina.jar:8.0.33]
	at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:828) [catalina.jar:8.0.33]
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:135) [catalina.jar:8.0.33]
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) [catalina.jar:8.0.33]
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) [catalina.jar:8.0.33]
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) [catalina.jar:8.0.33]
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) [catalina.jar:8.0.33]
	at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) [catalina.jar:8.0.33]
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) [catalina.jar:8.0.33]
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:522) [catalina.jar:8.0.33]
	at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1095) [tomcat-coyote.jar:8.0.33]
	at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:672) [tomcat-coyote.jar:8.0.33]
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1502) [tomcat-coyote.jar:8.0.33]
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1458) [tomcat-coyote.jar:8.0.33]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.7.0_60]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.7.0_60]
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-util.jar:8.0.33]
	at java.lang.Thread.run(Unknown Source) [na:1.7.0_60]
```

What is odd is that I can export this same code from Eclipse and drop the jar into an HTTP upload site hosted under IIS. Using the same configuration, the class is picked up without issue, and everything logs out as expected. This would be fine, but it's not a particularly efficient development workflow. Unfortunately Java isn't my strong spot, but I assume it's more an issue with the tooling, as the same code functions under IIS. Any ideas?
context_58
On a Web 8 XPM setup using DXA 1.2, I am getting the following error from Update Preview:

> Unable to update the changes using OData Service. The remote server returned an error: (406) Not Acceptable.

I am using legacy publishing for a proof of concept. I've tried changing the URLs for the content service and the token service. If I remove the token service URL from the publication target, I get:

> Unable to update the changes using OData Service. Unable to get Access Token for OData Service. One or more errors occurred. {"timestamp":1455833422115,"status":404,"error":"Not Found","message":"No message available","path":"/access_token.svc"}

So I assume OAuth is actually enabled. I've checked that cd_ambient_conf.xml has OAuth enabled set to true. Any ideas?
context_59
I have upgraded DXA 1.5 to DXA 1.8 and Java 1.7 to 1.8, deploying on a Tomcat 9 server. The "BinaryData" files are not created under \dxaWeb\BinaryData\publication-id\system\config\ when I build the dxaWeb app, so I'm getting a null exception. I have published all pages (under my site SG). Can you please help me fix the issue below?

```
2018-12-17 11:03:29,982 DEBUG LocalizationFactoryImpl - createLocalization: [826] /
2018-12-17 11:03:29,985 DEBUG DynamicMetaRetrieverImpl - Searching for binaryMeta for url='http://localhost:9080/system/config/_all.json'.
2018-12-17 11:03:30,009 DEBUG DefaultODataClient - Getting entity for query ODataClientQuery[GetDynamicMetaFunctionImport(Url='http%253A%252F%252Flocalhost%253A9080%252Fsystem%252Fconfig%252F_all.json',ItemType='BinaryMeta')]
2018-12-17 11:03:30,009 DEBUG BasicEndpointCaller - Preparing the call endpoint for given url: http://domain:8081/client/v4/content.svc/GetDynamicMetaFunctionImport(Url='http%253A%252F%252Flocalhost%253A9080%252Fsystem%252Fconfig%252F_all.json',ItemType='BinaryMeta')
2018-12-17 11:03:30,112 DEBUG BasicEndpointCaller - Request ended with 200 status code.
2018-12-17 11:03:30,113 DEBUG AtomEntityUnmarshaller - Unmarshalling entity for query: ODataClientQuery[GetDynamicMetaFunctionImport(Url='http%253A%252F%252Flocalhost%253A9080%252Fsystem%252Fconfig%252F_all.json',ItemType='BinaryMeta')]
2018-12-17 11:03:30,114 DEBUG DynamicMetaRetrieverImpl - Retrieved BinaryMeta instance: [BinaryMeta tcd:pub[826]/variant[config-bootstrap]/binarymeta[216112], application/json, /system/config/_all.json, /system/config/_all.json]
2018-12-17 11:03:30,140 DEBUG DefaultODataClient - Getting entity for query ODataClientQuery[GetComponentMetaFunctionImport(ComponentId=216112,PublicationId=826)]
2018-12-17 11:03:30,141 DEBUG BasicEndpointCaller - Preparing the call endpoint for given url: http://domain:8081/client/v4/content.svc/GetComponentMetaFunctionImport(ComponentId=216112,PublicationId=826)
2018-12-17 11:03:30,176 DEBUG BasicEndpointCaller - Request ended with 200 status code.
2018-12-17 11:03:30,176 DEBUG AtomEntityUnmarshaller - Unmarshalling entity for query: ODataClientQuery[GetComponentMetaFunctionImport(ComponentId=216112,PublicationId=826)]
2018-12-17 11:03:30,179 DEBUG WebComponentMetaFactoryImpl - Retrieved ComponentMeta instance: com.sdl.web.model.ComponentMetaImpl@25ae51ff
2018-12-17 11:03:30,180 DEBUG BinaryContentRetrieverImpl - Searching for binaryData for publicationId='826', binaryId='216112', variantId='config-bootstrap'.
2018-12-17 11:03:30,188 DEBUG DefaultODataClient - Getting entity for query ODataClientQuery[BinaryContents%28PublicationId%3D826%2CBinaryId%3D216112%2CVariantId%3DY29uZmlnLWJvb3RzdHJhcA%3D%3D%2CStreamContent%3Dfalse%29]
2018-12-17 11:03:30,189 DEBUG BasicEndpointCaller - Preparing the call endpoint for given url: http://domain:8081/client/v2/content.svc/BinaryContents%28PublicationId%3D826%2CBinaryId%3D216112%2CVariantId%3DY29uZmlnLWJvb3RzdHJhcA%3D%3D%2CStreamContent%3Dfalse%29
2018-12-17 11:03:30,228 DEBUG BasicEndpointCaller - Request ended with 200 status code.
2018-12-17 11:03:30,228 DEBUG AtomEntityUnmarshaller - Unmarshalling entity for query: ODataClientQuery[BinaryContents%28PublicationId%3D826%2CBinaryId%3D216112%2CVariantId%3DY29uZmlnLWJvb3RzdHJhcA%3D%3D%2CStreamContent%3Dfalse%29]
2018-12-17 11:03:30,229 DEBUG DefaultContentProvider - Writing binary content to file: C:\Program Files\Apache Software Foundation\Tomcat 9.0\wtpwebapps\dxaWeb\BinaryData\826\system\config\_all.json
Dec 17, 2018 11:03:30 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [org.springframework.web.servlet.DispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.NullPointerException] with root cause
java.lang.NullPointerException
	at com.sdl.webapp.tridion.mapping.DefaultContentProvider.getStaticContentFile(DefaultContentProvider.java:95)
	at com.sdl.webapp.tridion.mapping.AbstractDefaultContentProvider.getStaticContentFile(AbstractDefaultContentProvider.java:366)
	at com.sdl.webapp.tridion.mapping.AbstractDefaultContentProvider.getStaticContent(AbstractDefaultContentProvider.java:309)
	at com.sdl.webapp.common.impl.localization.LocalizationFactoryImpl.parseJsonFileTree(LocalizationFactoryImpl.java:239)
	at com.sdl.webapp.common.impl.localization.LocalizationFactoryImpl.loadMainConfiguration(LocalizationFactoryImpl.java:108)
	at com.sdl.webapp.common.impl.localization.LocalizationFactoryImpl.createLocalization(LocalizationFactoryImpl.java:84)
	at com.sdl.webapp.tridion.AbstractTridionLocalizationResolver.createLocalization(AbstractTridionLocalizationResolver.java:101)
	at com.sdl.webapp.tridion.AbstractTridionLocalizationResolver.getLocalization(AbstractTridionLocalizationResolver.java:68)
	at com.sdl.webapp.common.impl.WebRequestContextImpl.localization(WebRequestContextImpl.java:205)
	at com.sdl.webapp.common.impl.WebRequestContextImpl.getLocalization(WebRequestContextImpl.java:85)
	at com.sdl.webapp.common.impl.WebRequestContextImpl$$FastClassByCGLIB$$2bfec188.invoke()
	at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
	at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:689)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
	at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
	at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
	at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:622)
	at com.sdl.webapp.common.impl.WebRequestContextImpl$$EnhancerByCGLIB$$7633fbaa.getLocalization()
	at com.sdl.webapp.common.impl.interceptor.StaticContentInterceptor.preHandle(StaticContentInterceptor.java:102)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:914)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:778)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:634)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
	at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:668)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:770)
	at
```
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1415 ) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49 ) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149 ) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624 ) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61 ) at java.lang.Thread.run(Thread.java:748 )
context_60
I am trying to publish some pages, but publishing fails every time. I checked the logs (Event Viewer / database / publisher) but have not found any indication of the cause. The same error happens when trying to preview the content of the page. Please suggest where to look. The following is the error sample:

(2147747185) Object reference not set to an instance of an object.
Unable to get rendered content of Page (tcm:23-20146-64).
Unable to retrieve rendered data from Page.
at Ricoh.TemplateBuildingBlocks.GetWebDavUris.Transform(Engine engine, Package package)
at Tridion.ContentManager.Templating.Assembly.AssemblyMediator.Transform(Engine engine, Template template, Package package)
at Tridion.ContentManager.Templating.Assembly.CSharpSourceCodeMediator.RunTemplate(Engine engine, Package package, String templateUri, String className)
at Tridion.Templating.CSharpTemplate.CSharpSourceTemplate.Transform(Engine __engine, Package __package)
at Tridion.ContentManager.Templating.Assembly.CSharpSourceCodeMediator.Transform(Engine engine, Template template, Package package)
at Tridion.ContentManager.Templating.Engine.ExecuteTemplate(Template template, Package package)
at Tridion.ContentManager.Templating.Engine.InvokeTemplate(Package package, TemplateInvocation templateInvocation, Template template)
at Tridion.ContentManager.Templating.Compound.CompoundTemplateMediator.Transform(Engine engine, Template templateToTransform, Package package)
at Tridion.ContentManager.Templating.Engine.ExecuteTemplate(Template template, Package package)
at Tridion.ContentManager.Templating.Engine.InvokeTemplate(Package package, TemplateInvocation templateInvocation, Template template)
at Tridion.ContentManager.Templating.Engine.TransformPackage(Template template, Package package)
at Tridion.ContentManager.Templating.Engine.TransformItem(Template template, IdentifiableObject itemToRender)
at Tridion.ContentManager.Templating.TemplatingRenderer.Render(ResolvedItem resolvedItem, PublishInstruction instruction, PublicationTarget target, RenderedItem renderedItem, RenderContext renderContext)
at Tridion.ContentManager.Publishing.Rendering.RenderEngine.Render(ResolvedItem resolvedItem, PublishInstruction instruction, PublicationTarget target, RenderContext context)
at Tridion.ContentManager.Publishing.Rendering.RenderEngine.Render(IdentifiableObject item, Template template, PublishInstruction instruction, PublicationTarget target, RenderContext context)
at Tridion.ContentManager.Publishing.Rendering.ComWrapper.RenderEngineFacade.RenderItemWithTemplate(IdentifiableObject item, Template template, String legacyRenderInstruction)
at Tridion.ContentManager.Publishing.Rendering.ComWrapper.RenderEngineFacade.RenderPage(UserContext userContext, String pageXml, String pageTemplateXml, String instruction)
UtilitiesPublish.RenderPage
Page.Render
Request.Render
context_61
I am trying to publish an XML page where I am updating the "Output" of the page, created in the default finish action, using a C# TBB. I am able to preview the page in Tridion, but publishing fails with the following error: The number of render failures has exceeded its specified failure limit of 0. The error in the Event Viewer on the Tridion server is as below:

Hof.Tridion.BuildingBlocks.PageTemplates.GenerateMegaMenuXml.Transform(Engine engine, Package package)
Component: Templating
Errorcode: 0
User: NT AUTHORITY\SYSTEM
StackTrace Information Details:
at Hof.Tridion.BuildingBlocks.PageTemplates.GenerateMegaMenuXml.Transform(Engine engine, Package package)
at Tridion.ContentManager.Templating.Assembly.AssemblyMediator.Transform(Engine engine, Template template, Package package)
at Tridion.ContentManager.Templating.Assembly.CSharpSourceCodeMediator.RunTemplate(Engine engine, Package package, String templateUri, String className)
at Tridion.Templating.CSharpTemplate.CSharpSourceTemplate.Transform(Engine __engine, Package __package)
at Tridion.ContentManager.Templating.Assembly.CSharpSourceCodeMediator.Transform(Engine engine, Template template, Package package)
at Tridion.ContentManager.Templating.Engine.ExecuteTemplate(Template template, Package package)
at Tridion.ContentManager.Templating.Engine.InvokeTemplate(Package package, TemplateInvocation templateInvocation, Template template)
at Tridion.ContentManager.Templating.Compound.CompoundTemplateMediator.Transform(Engine engine, Template templateToTransform, Package package)
at Tridion.ContentManager.Templating.Engine.ExecuteTemplate(Template template, Package package)
at Tridion.ContentManager.Templating.Engine.InvokeTemplate(Package package, TemplateInvocation templateInvocation, Template template)
at Tridion.ContentManager.Templating.Engine.TransformItem(Template template, IdentifiableObject itemToRender)
at Tridion.ContentManager.Templating.TemplatingRenderer.Render(ResolvedItem resolvedItem, PublishInstruction instruction, PublicationTarget target, RenderedItem renderedItem, RenderContext renderContext)

Can anyone suggest where else to check to find the exact root cause of the issue?
context_62
We are facing an issue with the Tridion 2009 CME: it is getting slow very frequently. We are not able to open any page/component and the GUI becomes unresponsive. I have checked the logs (TCM/services logs/Event Viewer) and the database logs as well, but I did not find anything. I have also restarted the IIS web host for the TCM GUI. We are restarting the COM+ applications and services (3-4 times a day), but this only solves the problem temporarily. Please suggest areas to look into apart from these.
context_63
We have installed the Content Service, Discovery Service, and DXA Model Service. Since auto-registration for the DXA Model Service is failing, we added the role directly in the CD storage config for the Discovery Service and Content Service as follows, per the DXA Model Service GitHub documentation. While executing the sample application through Visual Studio, we are getting the error below:

> DXA Model Service is not registered; no extension property called 'dxa-model-service' found on Content Service Capability.

Can you please suggest if we are missing any configuration? We used the SDL documentation and referred to the URLs below:

https://velmuruganarjunan.wordpress.com/category/dxa/
https://community.sdl.com/product-groups/sdl-tridion-dx/tridion-sites/tridion-developer/b/weblog/posts/install-model-service-dxa-2-0-as-a-windows-service
context_64
Even after applying hotfix 1673 to Tridion 2013 SP1 (replacing Tree.js and incrementing the "modification" value in System.config), users on Chrome 49.0.2623.87 (64-bit) cannot save components. Users clearing their browser cache does not seem to make a difference. The users who have reported the issue so far were using Macs. The error message in the console is different from what was reported [in a similar question][1] prior to the hotfix: [![Chrome Screen Capture post Hotfix][2]][2] Does the hotfix not address this particular version of Chrome? Or is there another step I should be taking to ensure the hotfix is applied? [1]: http://tridion.stackexchange.com/questions/14107/on-chrome-49-component-including-link-field-cannot-be-saved/ [2]: http://i.stack.imgur.com/hrYx1.png
context_65
When publishing, we see items go to `Waiting for deployment` and then remain there. In the content delivery logging, we can see that deployment is complete, and in the transport log on the CM we can see that the deployment appears to be successful. On a working system, we can see in the transport log that there are several polling attempts before a success status is finally reached and the state file is remotely deleted. On our problematic system, we see that success is reached on the first polling attempt. (Both behaviours could be normal, perhaps depending on timing.) The problem is that even though we can see success reported in the transport log, the item remains in the publish queue at `Waiting for deployment`. What is the cause likely to be?
context_66
I am trying to create a new structure with metadata Keywords. The metadata values do not load in the dropdown, and I get the error `Uncaught ReferenceError: $om is not defined`. Can anyone please suggest a fix?
context_67
I need to design a faceted search across a number of products. The content is tagged with metadata using the C&K functionality. Normally I'd use something like Elasticsearch or SOLR to handle the faceting based on this metadata, but in this case that option isn't available to me. Speaking to various people, I'm hearing a general theme that the API is sub-optimal for doing this sort of work and that you'd have to construct the facets yourself from the taxonomy using many API calls. I appreciate the genericity of this question, but has anyone successfully implemented this? What are the pitfalls? M
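For what it's worth, once the keyword assignments are in hand, the facet-building step itself is small; the expensive part is the many API calls to fetch them. The sketch below is a minimal version of that step (the data shapes are my assumptions, not the C&K API): it counts how many tagged products fall under each keyword, which is the core of a facet list.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** Builds facet counts from product-to-keyword assignments (hypothetical shapes). */
public class FacetBuilder {

    /**
     * @param taggedProducts map of product id to the keywords it is classified with
     * @return keyword to number of products carrying it, e.g. for display as "Red (12)"
     */
    public static Map<String, Integer> buildFacets(Map<String, List<String>> taggedProducts) {
        Map<String, Integer> facets = new TreeMap<>(); // sorted keys for stable display
        for (List<String> keywords : taggedProducts.values()) {
            for (String keyword : keywords) {
                facets.merge(keyword, 1, Integer::sum); // increment, starting at 1
            }
        }
        return facets;
    }
}
```

The pitfall the question anticipates is real: fetching the classified-item lists per keyword can mean one call per keyword, so caching those lists (per publish, rather than per request) is usually what makes this approach workable without a dedicated search engine.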
context_68
I am not able to get the URL of the image component, though part of the TCM ID of the component is showing, as follows: ` ` is coming as ` ` But when I use `@Model.Url` it returns a null value. Also, I have already published another component containing that image component, and that succeeded. What might be the issue?
context_69
Backstory: the server is in Europe; I'm in the US. When attempting to create a new multimedia object in Tridion R5, I click on "Schema" to access the dropdown and get the following error: Object doesn't support property or method 'show'. The error also occurs when right-clicking on an object. The really strange part is that in Europe the system works properly (although not with my ID). Does Tridion use (or can it use) other rights than what is set in Tridion itself? It appears to me that perhaps the call to open the dropdown is accessing a resource that I don't have access to. Is that a possibility, or does anyone else have any ideas? I don't have access to the server itself, and our support has been sketchy at best, so I'd like to try to point them in the right direction. Any help would be greatly appreciated.
context_70
I know it's not possible to copy/paste between publications, but our client has just spent two days creating structure groups and pages in a parent publication when they should have been in a child publication. Is there any way to "move" these pages? I just tried to be sneaky with Content Porter by exporting from the parent, renaming publications, and importing, but that reset the child publication's BluePrint settings and I couldn't get them back (I ended up having to recreate the child publication). I could write something clever with the Core Service, but I'm pushed for time. If I went to SDL Support, is there a DB script or something I can use? Cheers
context_71
I am new to SDL Tridion. I am using DD4T to implement it. Right now I have two servers (in data centers): 1. Content Management (Server A) 2. Content Delivery (Server B). Now I am creating DD4T templates on my local machine. In the DD4T web site, I have copied all the lib and config directories, and I am now getting a runtime error:

[NullReferenceException: Object reference not set to an instance of an object.]
DD4T.Providers.SDLTridion2013.TridionPageProvider.GetContentByUrl(String Url) +547
DD4T.Factories.PageFactory.TryFindPage(String url, IPage& page) +557
DD4T.Mvc.Controllers.TridionControllerBase.GetModelForPage(String PageId) +109
DD4T.Mvc.Controllers.TridionControllerBase.Page(String pageId) +22

My questions are:
1. How does the DD4T application find the Content Delivery server (Server B), which we never mentioned anywhere?
2. How do I solve the above exception?
3. Can we provide the Content Delivery URL in the DD4T web site and connect to it from Visual Studio, or do we need to set up a Content Delivery instance in my localhost environment?

Please help me get started.
context_72
I have a custom deployer extension and a configuration file with some of the parameters required for the extension. The custom extension checks whether the component already exists in a table in the database; if not, it is inserted, and if it does, its LastUpdate date is changed. This is what the config file looks like. In the Java class, the code to get these parameters and establish the connection to the database is:

String strServerName = config.getAttribute("ServerName");
String strDataBaseName = config.getAttribute("DataBaseName");
String strUserName = config.getAttribute("UserName");
String strPassword = config.getAttribute("Password");
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
Connection cnnConnection = DriverManager.getConnection(
    String.format("jdbc:sqlserver://%s;databaseName=%s", strServerName, strDataBaseName),
    strUserName, strPassword);

However, whenever I deploy, this is the error that I get in the logs:

ERROR CacheFlusher - com.tridion.configuration.ConfigurationException: No attribute found for: ServerName.

What am I doing wrong? Please help. Thanks in advance.
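As an aside, the insert-or-update behaviour described above can be kept separate from the Tridion plumbing and tested on its own. The sketch below is a minimal version of that logic against plain JDBC; the table name `ComponentLog` and its columns are assumptions for illustration, not taken from the question's (omitted) config file.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

/** Insert-or-update component tracking, kept separate from the deployer plumbing. */
public class ComponentUpsert {
    // Hypothetical table and columns, for illustration only.
    static final String SELECT_SQL = "SELECT COUNT(*) FROM ComponentLog WHERE ComponentId = ?";
    static final String INSERT_SQL = "INSERT INTO ComponentLog (ComponentId, LastUpdate) VALUES (?, ?)";
    static final String UPDATE_SQL = "UPDATE ComponentLog SET LastUpdate = ? WHERE ComponentId = ?";

    /** Picks the statement to run once we know whether the row exists. */
    public static String chooseSql(boolean exists) {
        return exists ? UPDATE_SQL : INSERT_SQL;
    }

    /** Inserts the component if absent, otherwise refreshes its LastUpdate date. */
    public static void upsert(Connection conn, String componentId) throws SQLException {
        boolean exists;
        try (PreparedStatement ps = conn.prepareStatement(SELECT_SQL)) {
            ps.setString(1, componentId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                exists = rs.getInt(1) > 0;
            }
        }
        Timestamp now = new Timestamp(System.currentTimeMillis());
        try (PreparedStatement ps = conn.prepareStatement(chooseSql(exists))) {
            if (exists) {
                ps.setTimestamp(1, now);
                ps.setString(2, componentId);
            } else {
                ps.setString(1, componentId);
                ps.setTimestamp(2, now);
            }
            ps.executeUpdate();
        }
    }
}
```

As for the ConfigurationException itself: `config.getAttribute` reads attributes of whichever configuration node the deployer handed to the module, so a common cause of "No attribute found" is reading from a different node in the tree than the one carrying ServerName; logging which node was actually received (if the Configuration class exposes its name) is a cheap way to confirm this.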
context_73
We have a custom SSO implementation that we are using to secure the SDL Web 8 CME. For the most part it is working quite well. However, for understandable security reasons our SSO provider needs to re-authenticate after a period of X hours. We can set X to anything we like, but it can't be infinity. Some of our users leave their browser with the SDL Web 8 GUI open all week/month. If they leave it open longer than X, the GUI stops responding (and we start getting errors in the notification box of the CME). This is to be expected, as the SSO authentication module ends up throwing 401 Unauthorized responses on all requests to the CME (and accompanying services). Refreshing the browser takes them to the SSO login page, and they can log in again. However, this is not very intuitive, and our users get confused by the errors. To prevent this, I would like to force the browser to redirect after a period of Y hours and send the user to our SSO login page. This way I could force them to log in again before the timeout occurs. It would also limit the security risk of users leaving their browsers open with a CME that can change our live websites. I understand we *should* tell our users to close their browser at the end of the day, but this is not a realistic solution. Does anyone have any suggestions or examples of how I can redirect a user in the CME to a specific external URL after a period of Y hours? An even better solution might be that whenever the GUI receives a 401, instead of outputting it in the notification area, it redirects the whole browser to my SSO page. **SIDE NOTE:** Unfortunately Alchemy does not work with our SSO implementation right now, so we won't be able to use an Alchemy extension for this.
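Wherever the redirect ends up being triggered (in the SSO module itself, or in a handler sitting in front of the CME), the timing decision is small enough to isolate and test. A minimal sketch, assuming the authentication module records when each session was established; the class and method names here are illustrative, not part of any SDL API:

```java
import java.time.Duration;
import java.time.Instant;

/** Decides when a CME session should be bounced back to the SSO login page. */
public class ReauthPolicy {
    private final Duration maxSessionAge; // the "Y hours" from the question

    public ReauthPolicy(Duration maxSessionAge) {
        this.maxSessionAge = maxSessionAge;
    }

    /** True once the session is at least maxSessionAge old; callers should then send a 302 to the SSO login URL. */
    public boolean shouldRedirect(Instant loginTime, Instant now) {
        return Duration.between(loginTime, now).compareTo(maxSessionAge) >= 0;
    }
}
```

Setting Y slightly below the provider's X means the user is sent to the login page before the provider starts rejecting requests, so they never see the raw 401 errors in the notification area.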
context_74
I'm currently working with the legacy Business Connector in Tridion 2011. I create a new Web Reference in my Visual Studio project and run the following code:

var requestXml = ...;
var client = new com.myorganisation.tridion2011.BusinessConnector();
client.Credentials = new System.Net.NetworkCredential("myusername", "mypassword", "mydomain");
var response = client.execute("default", requestXml, null);

The response is: Could not load Java runtime libaries at C:\Program Files (x86)\Java\jre6\bin\client\jvm.dll

I guess something thinks I'm running a 64-bit version of Windows, when in fact I'm running 32-bit Windows 7. The `jvm.dll` installed to `C:\Program Files\Java\jre6\bin\client\jvm.dll`. When I use the web service associated with my old version of Tridion (5.3), everything works fine; the response is as expected (it adds a new component):

var requestXml = ...;
var client = new com.myorganisation.tridion53.BusinessConnector();
client.Credentials = new System.Net.NetworkCredential("myusername", "mypassword", "mydomain");
var response = client.execute("default", requestXml, null);

Has something changed, or is there a configuration setting I must modify?
context_75
We moved the Tridion DLL to the GAC from the local web app bin directory:

gacutil /i Tridion.ContentDelivery.AmbientData.dll
Microsoft (R) .NET Global Assembly Cache Utility. Version 4.0.30319.1
Assembly successfully added to the cache

However, the page is not loading, saying `Could not load type 'Tridion.ContentDelivery.AmbientData.HttpModule'.` Is there anything else I need to do to make it work?
context_76
I set up a CMS in an AWS environment, and after that I created the publication, target groups, schemas, components, pages, CTs, PTs, etc. But after creating the first publication I am not able to create any more publications, and I get the error below: ![enter image description here][1] [1]: http://i.stack.imgur.com/kJfe1.png
context_77
How would I add a field on the general tab of a group?
context_78
While publishing, we are getting the error below:

> An error occurred while communicating with Topology Manager 'http://localhost:81/ttm201601'

When we re-publish multiple times, the items eventually succeed. We are unable to find any logs reflecting the error on the CMS and Publisher servers. Any suggestions to resolve the above error?
context_79
I have a schema as follows: [![enter image description here][1]][1] As you can see, it has some embedded fields which are optional (Select List, for example). Some fields inside those embedded fields are mandatory. The component you are looking at saves successfully via the CME. But when I try to content port the component, Content Porter complains as follows:

> ERROR [2015-10-13 16:19:02,717] DTAPTools:124: Error -> 10/13/2015:4:19:01 PM - Tridion.ContentManager.InvalidXmlException: XML validation error. Reason: The element 'selectList' in namespace 'http://www.sdl.com/web/schemas/core' has invalid child element 'options' in namespace 'http://www.sdl.com/web/schemas/core'. List of possible elements expected: 'key' in namespace 'http://www.sdl.com/web/schemas/core'.

Obviously, my first thought was to synchronize the component, and so I did:

Base.CoreServiceBase sh = new Base.CoreServiceBase();
try
{
    var options = new Core.SynchronizeOptions { SynchronizeFlags = Core.SynchronizeFlags.All };
    sh.OpenSession();
    sh.Session.SynchronizeWithSchemaAndUpdate(schemaId, options);
}
catch (Exception ex)
{
    Console.Write(ex.Message);
}
finally
{
    sh.CloseSession();
}

That code gives me the same error as Content Porter. It seems like a bug to me, since if I fill in the mandatory fields within the optional embedded field, the sync process runs just fine. If I then remove them again (keep in mind that the CME lets me save the component as expected) and run the sync process again, it fails with the same error once more. Have you encountered anything similar? Am I missing a hotfix? Thanks, [1]: http://i.stack.imgur.com/piDef.png
context_80
We were doing some content entry and suddenly saw that one of the components gave errors on saving. Now the component is showing as locked in the CME. If we try to Open, Check In, or Undo Checkout on the component, we get errors as shown below: ![enter image description here][1] What I have already tried so far:

- Tried Check In, Undo Checkout, and re-opening, but all end with the error; also, the Rollback option is shown as "disabled"
- No error or warning is present in the event logs
- If we open any other component (even one based on the same schema), it opens without any issue. Other operations like saving, check out, check in, etc. are also working fine
- This erroneous component was opening correctly a few days ago
- There is an event DLL, but I have verified that it has not been changed recently, nor does it contain any suspicious code that might cause such an issue
- I could not check anything in the CM database, as I do not have access to it

Did anyone face a similar issue before, and can you suggest anything else that can be checked for the root cause? [1]: http://i.stack.imgur.com/9oT2T.png
context_81
Fun, easy question for those on SDL Tridion 2013. In SDL Tridion 2011, there was an executable to rebuild the Solr search collection. How do we re-index search in [tag:2013] (no peeking at the tags)?
context_82
I have a DD4T 2011 site. I get a 404 for every URL, and no requests are reaching the page controller. Any ideas on what the issue could be? `runAllManagedModulesForAllRequests="true"` is in web.config. Could this be a license issue? I found the entries below in the log, so it looks like the license is OK. I don't see any errors in my log:

2014-08-05 14:15:20,460 DEBUG LicenseReader - Found license entry for Tridion CD Broker, trying to validate this license key
2014-08-05 14:15:20,460 DEBUG LicenseReader - There was no licenseLock Location specified, using user.home var: C:\
2014-08-05 14:15:20,460 DEBUG LicenseReader - There was no licenseLock Location specified, using user.home var: C:\
2014-08-05 14:15:20,460 DEBUG LicenseReader - Found options Tridion CD Broker = true
2014-08-05 14:15:20,460 DEBUG LicenseReader - Found Key Tridion CD Broker, Value true
context_83
Using the Tridion Object Model API, what is the correct way to remove the Component Process from a Tridion schema? Assigning Schema.ComponentProcess to Nothing does not seem to be working. VBScript sample code is included below:

testSchemaID = "tcm:3-123456-8"
Dim oSchema
Set oSchema = TDSE.getObject(testSchemaID, 2)
oSchema.ComponentProcess = Nothing
oSchema.Save(True)
Set oSchema = Nothing
context_84
I am running a DXA 1.2 website with Tridion 2013 SP1. Binaries are being pushed out as DCPs. When I publish a new binary, everything seems to be working as expected, i.e. the item gets updated in the Broker and the item is downloaded to the local filesystem (~\BinaryData\PUBLICATION_ID\images). I am having a peculiar issue with a binary that has been replaced in the CME with the same name and published to the Broker. A new page and updates to an existing page are working as expected, but resetting IIS or an admin/refresh is not refreshing the newly updated binary. The cache settings in my web.config are defaulted to the below.

If I manually delete (invalidate) the file in my filesystem (~\BinaryData\PUBLICATION_ID\images), the newer component is pulled from the Broker. When modified items (binary components) are published to the Broker, I can see LAST_PUBLISHED_DATE being refreshed in the Broker database:

SELECT * FROM ITEMS WHERE PUBLICATION_ID = MY_PUBLICATION_ID AND TITLE LIKE '%MY_IMAGE_NAME%'

For some reason, the website is not picking up the CacheSettings from the Broker database, or the cache is not being invalidated within 5 seconds (per the configuration above). Following are the logs from the Core log. Am I missing any configuration? Is there a way to debug this differently? I am looking into our custom HttpModule code to see if it is causing issues as well.

2016-05-11 13:56:52,071 DEBUG ClaimStore - put: uri=taf:request:uri, value=/images/MYIMAGE.png
2016-05-11 13:56:52,071 DEBUG ClaimStore - put: uri=taf:request:full_url, value=https://MYDOMAIN:MYPORT/images/MYIMAGE.png
2016-05-11 13:56:52,071 DEBUG ClaimStore - put: uri=taf:request:headers, value={sm_filterctxtptr=[Ljava.lang.String;@8aa59fb, cookie=[Ljava.lang.String;@67f9722e, cache-control=[Ljava.lang.String;@29b75f56, sm_location=[Ljava.lang.String;@45d41c7f, connection=[Ljava.lang.String;@689af9a9, host=[Ljava.lang.String;@167cfbe6, accept-language=[Ljava.lang.String;@677dc492, accept=[Ljava.lang.String;@71f48252, user-agent=[Ljava.lang.String;@6b980ff0, accept-encoding=[Ljava.lang.String;@4c3e8cbf, upgrade-insecure-requests=[Ljava.lang.String;@1e8a76b4, pragma=[Ljava.lang.String;@14773316}
2016-05-11 13:56:52,072 DEBUG ClaimStore - put: uri=taf:request:cookies, value={s_invisit=true, s_cc=true, s_ev21=%5B%5B'typed_bookmarked'%2C'1462988733231'%5D%2C%5B'other_traffic'%2C'1462988867685'%5D%2C%5B'other_traffic'%2C'1462988868126'%5D%2C%5B'other_traffic'%2C'1462988868496'%5D%2C%5B'other_traffic'%2C'1462988870020'%5D%2C%5B'other_traffic'%2C'1462988871487'%5D%2C%5B'other_traffic'%2C'1462989276372'%5D%2C%5B'other_traffic'%2C'1462989276732'%5D%2C%5B'other_traffic'%2C'1462989278238'%5D%2C%5B'other_traffic'%2C'1462989279156'%5D%5D, s_sq=%5B%5BB%5D%5D, s_fid=64E44A1D527ECE68-0EFF535B42268EA5, gpv_pn=fcllc%3Apublic%3Ahomepage, TAFSessionId=tridion_2f644085-45da-4ed8-87c3-f27762e5d28a, ASP.NET_SessionId=agoyujv2aqmh2ihimptmsojf, TAFTrackingId=tridion_88bb3d08-888d-492b-a773-5f97d7080f8a, s_vnum=14650714MYPUBID30%26vn%3D12}
2016-05-11 13:56:52,072 DEBUG ClaimStore - put: uri=taf:request:parameters, value={CONTENT_TYPE=[Ljava.lang.String;@1dbaf3d9, QUERY_STRING=[Ljava.lang.String;@5cdf3ace}
2016-05-11 13:56:52,073 DEBUG ClaimStore - put: uri=taf:server:variables, value={REMOTE_USER=, PATH_TRANSLATED=D:\IIS_LOCATION\1WFW_MYPORT\images\MYIMAGE.png, SERVER_PORT=MYPORT, SCRIPT_NAME=/images/MYIMAGE.png, REMOTE_ADDR=REMOTE_IP, AUTH_TYPE=, SERVER_PROTOCOL=HTTP/1.1, REQUEST_METHOD=GET, DOCUMENT_ROOT=D:\Inetpub\1WFW_MYPORT, REMOTE_HOST=MYIP, SERVER_NAME=MYDOMAIN, SECURE=false}
2016-05-11 13:56:52,073 DEBUG AmbientRuntime - Is http header processor enabled? False
2016-05-11 13:56:52,073 DEBUG AmbientRuntime - Is http header processor enabled? False
2016-05-11 13:56:52,073 DEBUG AmbientRuntime - Is http header processor enabled? False
2016-05-11 13:56:52,073 DEBUG AmbientRuntime - Is http header processor enabled? False
2016-05-11 13:56:52,073 DEBUG ClaimStore - put: uri=taf:session:id, value=tridion_2f644085-45da-4ed8-87c3-f27762e5d28a
2016-05-11 13:56:52,074 DEBUG ClaimStore - put: uri=taf:tracking:id, value=tridion_88bb3d08-888d-492b-a773-5f97d7080f8a
2016-05-11 13:56:52,074 DEBUG ClaimStore - put: uri=taf:session:attributes, value={Tridion.ContentDelivery.AmbientData.ClaimStore=Tridion.ContentDelivery.AmbientData.ClaimStore}
2016-05-11 13:56:52,074 DEBUG AmbientRuntime - Begin processing cookie claims.
2016-05-11 13:56:52,074 DEBUG AmbientRuntime - Begin processing cookie claims.
2016-05-11 13:56:52,074 DEBUG AmbientRuntime - Cookie forwarding is enabled: True
2016-05-11 13:56:52,074 DEBUG AmbientRuntime - Cookie forwarding is enabled: True
2016-05-11 13:56:52,075 DEBUG AmbientRuntime - Cookie forwarding for account is set to: False
2016-05-11 13:56:52,075 DEBUG AmbientRuntime - Cookie forwarding for account is set to: False
2016-05-11 13:56:52,075 DEBUG AmbientRuntime - IP address is in the white list: True
2016-05-11 13:56:52,075 DEBUG AmbientRuntime - IP address is in the white list: True
2016-05-11 13:56:52,075 DEBUG AmbientRuntime - Cookie forwarding for current request is allowed: True
2016-05-11 13:56:52,075 DEBUG AmbientRuntime - Cookie forwarding for current request is allowed: True
2016-05-11 13:56:52,075 DEBUG ClaimCookieDeserializer - The list of ClaimsCookies sent to be deserialized is empty!
2016-05-11 13:56:52,076 DEBUG AmbientRuntime - Dispatching OnRequestStart event
2016-05-11 13:56:52,076 DEBUG AmbientRuntime - Dispatching OnRequestStart event
2016-05-11 13:56:52,076 DEBUG ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:devicetype, value=Desktop
2016-05-11 13:56:52,076 DEBUG ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:mobiledevice, value=NotMobile
2016-05-11 13:56:52,077 DEBUG ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:acceptlanguage, value=en-US
2016-05-11 13:56:52,081 DEBUG ClaimStore - put: uri=taf:claim:context:ui:android, value=false
2016-05-11 13:56:52,081 DEBUG ClaimStore - put: uri=taf:claim:context:ui:largeBrowser, value=false
2016-05-11 13:56:52,081 DEBUG ClaimStore - put: uri=taf:claim:context:userRequest:fullUrl, value=
2016-05-11 13:56:52,081 DEBUG ClaimStore - put: uri=taf:claim:context:os:model, value=Windows 7
2016-05-11 13:56:52,082 DEBUG ClaimStore - put: uri=taf:claim:context:os:vendor, value=
2016-05-11 13:56:52,082 DEBUG ClaimStore - put: uri=taf:claim:context:os:variant, value=
2016-05-11 13:56:52,082 DEBUG ClaimStore - put: uri=taf:claim:context:os:version, value=
2016-05-11 13:56:52,082 DEBUG ClaimStore - put: uri=taf:claim:context:userHttp:cacheControl, value=
2016-05-11 13:56:52,082 DEBUG ClaimStore - put: uri=taf:claim:context:device:model, value=Other
2016-05-11 13:56:52,082 DEBUG ClaimStore - put: uri=taf:claim:context:device:tablet, value=false
2016-05-11 13:56:52,083 DEBUG ClaimStore - put: uri=taf:claim:context:device:pixelDensity, value=217
2016-05-11 13:56:52,083 DEBUG ClaimStore - put: uri=taf:claim:context:device:vendor, value=unknown
2016-05-11 13:56:52,083 DEBUG ClaimStore - put: uri=taf
: claim : context : device : inputDevices , value= [ ] 2016 - 05 - 11 13:56:52,083 DEBUG ClaimStore - put : uri = taf : claim : context : device : robot , value = false 2016 - 05 - 11 13:56:52,083 DEBUG ClaimStore - put : uri = taf : claim : context : device : displayWidth , value=800 2016 - 05 - 11 13:56:52,083 DEBUG ClaimStore - put : uri = taf : claim : context : device : variant , value= 2016 - 05 - 11 13:56:52,083 DEBUG ClaimStore - put : uri = taf : claim : context : device : pixelRatio , value=1.0 2016 - 05 - 11 13:56:52,084 DEBUG ClaimStore - put : uri = taf : claim : context : device : version , value= 2016 - 05 - 11 13:56:52,084 DEBUG ClaimStore - put : uri = taf : claim : context : device : mobile , value = false 2016 - 05 - 11 13:56:52,084 DEBUG ClaimStore - put : uri = taf : claim : context : device : displayHeight , value=640 2016 - 05 - 11 13:56:52,084 DEBUG ClaimStore - put : uri = taf : claim : context : browser : imageFormatSupport , value=[PNG , JPEG ] 2016 - 05 - 11 13:56:52,084 DEBUG ClaimStore - put : uri = taf : claim : context : browser : model , value = Chrome 2016 - 05 - 11 13:56:52,084 DEBUG ClaimStore - put : uri = taf : claim : context : browser : scriptSupport , value=[JavaScript ] 2016 - 05 - 11 13:56:52,085 DEBUG ClaimStore - put : uri = taf : claim : context : browser : inputModeSupport , value=[useInputmodeAttribute ] 2016 - 05 - 11 13:56:52,085 DEBUG ClaimStore - put : uri = taf : claim : context : browser : vendor , value = unknown 2016 - 05 - 11 13:56:52,085 DEBUG ClaimStore - put : uri = taf : claim : context : browser : stylesheetSupport , value=[css10 , css21 ] 2016 - 05 - 11 13:56:52,085 DEBUG ClaimStore - put : uri = taf : claim : context : browser : markupSupport , value=[HTML5 ] 2016 - 05 - 11 13:56:52,085 DEBUG ClaimStore - put : uri = taf : claim : context : browser : displayWidth , value=800 2016 - 05 - 11 13:56:52,085 DEBUG ClaimStore - put : uri = taf : claim : context : browser : variant , value= 2016 - 05 - 11 
13:56:52,085 DEBUG ClaimStore - put : uri = taf : claim : context : browser : cookieSupport , value = true 2016 - 05 - 11 13:56:52,086 DEBUG ClaimStore - put : uri = taf : claim : context : browser : version , value=49.0.2623 2016 - 05 - 11 13:56:52,086 DEBUG ClaimStore - put : uri = taf : claim : context : browser : displayHeight , value=640 2016 - 05 - 11 13:56:52,086 DEBUG ClaimStore - put : uri = taf : claim : context : browser : jsVersion , value=1.8.5 2016 - 05 - 11 13:56:52,086 DEBUG ClaimStore - put : uri = taf : claim : context : browser : inputDevices , value= [ ] 2016 - 05 - 11 13:56:52,086 DEBUG ClaimStore - put : uri = taf : claim : context : browser : cssVersion , value=2.1 2016 - 05 - 11 13:56:52,086 DEBUG ClaimStore - put : uri = taf : claim : context : browser : modelAndOS , value = Windows 7 Chrome 2016 - 05 - 11 13:56:52,086 DEBUG ClaimStore - put : uri = taf : claim : context : browser : displayColorDepth , value=16 2016 - 05 - 11 13:56:52,087 DEBUG ClaimStore - put : uri = taf : claim : context : browser : preferredHtmlContentType , value = text / html 2016 - 05 - 11 13:56:52,087 DEBUG ClaimStore - put : uri = taf : claim : context : userServer : serverPort , value= 2016 - 05 - 11 13:56:52,087 DEBUG ClaimStore - put : uri = taf : claim : context : userServer : remoteUser , value= 2016 - 05 - 11 13:56:52,087 DEBUG ClaimStore - put : uri = taf : claim : context : INTERNAL1 , value = com.sdl.context.engine . ImmutableContextMap@438dda04 2016 - 05 - 11 13:56:52,089 DEBUG BinaryMetaFactory - Finding binary by url MYPUBID , /images / MYIMAGE.png 2016 - 05 - 11 13:56:52,089 DEBUG SessionManagerImpl - No session opened for the current execution thread . 2016 - 05 - 11 13:56:52,089 DEBUG SessionProxyMethodHandler - Intercepted call to ' findByURL ' while not in session . 
2016 - 05 - 11 13:56:52,102 DEBUG BinaryMetaFactory - BinaryMetaFactory : Retrieved BinaryMeta instance : 7225 2016 - 05 - 11 13:56:52,103 DEBUG ComponentMetaFactory - Started Retrieving ComponentMeta instance 2016 - 05 - 11 13:56:52,103 DEBUG SessionManagerImpl - No session opened for the current execution thread . 2016 - 05 - 11 13:56:52,103 DEBUG SessionProxyMethodHandler - Intercepted call to ' findByPrimaryKey ' while not in session . 2016 - 05 - 11 13:56:52,111 DEBUG ComponentMetaFactory - Retrieved ComponentMeta instance : com.tridion.storage.mapper.ComponentMetaImpl@485b8e62 2016 - 05 - 11 13:56:52,170 DEBUG AmbientRuntime - Ambient Data context initialization . 2016 - 05 - 11 13:56:52,170 DEBUG AmbientRuntime - Ambient Data context initialization . 2016 - 05 - 11 13:56:52,171 DEBUG WebContext - setCurrentClaimStore : com.tridion.ambientdata.dotnet.DotNetClaimStore@72e0e90a , thread : Thread-0
context_85
We have a requirement to disable the creation of publish transactions in some publications. We cannot remove these publications from the publication target, because we still need them, but we would like to do it via the event system, for example on the transaction save args. I have tried `EventPhase.Initiated`, but nothing seems to work. Has anyone had a similar requirement, and how did they achieve it?
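For reference, the shape of such a veto is simple even though the wiring is Tridion-specific (in the .NET event system this would be a `TcmExtension` handler; here the publication IDs, the `transaction` fields, and the exception type are all hypothetical stand-ins). A minimal sketch of the pattern, assuming the handler can see the target publication and abort the save by throwing:

```python
# Sketch of a "veto on save" handler. In the real event system the handler
# would be registered for the transaction-save event in the Initiated phase;
# raising from it aborts the operation. Everything below is illustrative.
BLOCKED_PUBLICATIONS = {"tcm:0-5-1", "tcm:0-7-1"}  # hypothetical IDs

class PublishBlockedError(Exception):
    """Raised to abort the transaction before it is created."""

def on_transaction_save(transaction):
    # transaction is assumed to expose the target publication's ID
    if transaction["publication_id"] in BLOCKED_PUBLICATIONS:
        raise PublishBlockedError(
            f"Publishing is disabled for {transaction['publication_id']}")
    return transaction  # allow the save to proceed unchanged
```

The key design point is that the check runs before the transaction is persisted, so blocked publications never even enter the queue.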
context_86
I am following the URL below to configure DD4T 2.0 to consume the DXA model service: http://blog.trivident.com/switching-to-the-dxa-2-model-service-in-your-dd4t-application/

As per the above article, when we try to validate the model service to read content published from DD4T 2.0 templates using Postman, we are getting the below error:

{ "timestamp": ["String", "2018-06-01T07:50:12.821+0000"], "status": 500, "error": "Internal Server Error", "exception": "com.sdl.webapp.common.api.content.ContentProviderException", "message": "Couldn't deserialize DD4T content for request PageRequestDto(publicationId=32, uriType=tcm, path=/somepage/index.htm, includePages=INCLUDE, contentType=MODEL, dataModelType=R2, expansionDepth=100, depthCounter=com.sdl.dxa.common.dto.DepthCounter@9f)", "path": "/PageModel/tcm/32/somepage/index.htm" }

The content comes through fine when we use the content service URL in compressed JSON format. Can you please help in resolving the above error?
context_87
I need to design a faceted search against a number of products. The content is tagged up with metadata using the C&K functionality. Normally I'd use something like Elasticsearch or Solr to deal with the faceting based on this metadata, but in this case that option isn't available to me. Speaking to various people, I'm hearing a general theme that the API is sub-optimal for doing this sort of work and that you'd have to construct the facets yourself from the taxonomy using many API calls. I appreciate the generality of this question, but has anyone successfully implemented this? What are the pitfalls? M
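To make the "construct the facets yourself" theme concrete, here is a minimal sketch of facet counting over keyword-tagged items, assuming the products and their keyword URIs have already been fetched via the taxonomy API (the product data and keyword names below are entirely hypothetical):

```python
from collections import Counter
from itertools import chain

# Hypothetical in-memory stand-in for items already retrieved from the
# taxonomy API: each product carries the keyword URIs it is tagged with.
products = [
    {"title": "Widget A", "keywords": {"colour/red", "size/large"}},
    {"title": "Widget B", "keywords": {"colour/red", "size/small"}},
    {"title": "Widget C", "keywords": {"colour/blue", "size/small"}},
]

def facet_counts(items, selected=frozenset()):
    """Return (facet -> count, matching items) for the current selection."""
    # an item matches when it carries every currently selected keyword
    matching = [p for p in items if selected <= p["keywords"]]
    counts = Counter(chain.from_iterable(p["keywords"] for p in matching))
    # hide facets the user has already picked
    return {k: v for k, v in counts.items() if k not in selected}, matching

counts, hits = facet_counts(products, {"colour/red"})
```

The pitfall the question hints at shows up here: every refinement recomputes counts over the full item set, so with a remote taxonomy API you would want to fetch the classified-item lists once and cache them rather than issue one call per keyword per request.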
context_88
We have an issue where our database transactions get locked during mass publishing to live. We are running the Broker DB on MSSQL. We have the following configuration in the cd_deployer_conf.xml file: We also do publishing to 2 destinations. What we observe is that our transactions get locked in the database during mass publishing to both destinations, and the transactions just stop changing state. What we then need to do is remote into the DB server and manually kill the DB transactions. We contacted SDL Support, and they suggested decreasing the number of workers, which is unacceptable to us because of performance. Has anyone observed a similar issue, and how did they fix it? Thanks in advance.
context_89
I'm following a [suggestion from Dominic Cronin][1] on setting up multiple host headers to help demonstrate different users on the same CMS, but I'm stuck at the login pop-up.

1. As an administrator, I added the following to `C:\Windows\System32\drivers\etc\hosts` and saved the changes:

        127.0.0.1 authors.tridion.local
        127.0.0.1 admin.tridion.local

2. In the Site Bindings for SDL Tridion 2011 (my CMS installation) in IIS 7.5, I have:

   **For authors**:

   - Type: http
   - IP address: `All Unassigned` (i.e. "*")
   - Port: 80
   - Host name: authors.tridion.local

   **For admins**:

   - Type: http
   - IP address: `All Unassigned` (i.e. "*")
   - Port: 80
   - Host name: admin.tridion.local

When attempting to visit `http://authors.tridion.local` or `http://admin.tridion.local` in Chrome, I'm prompted to log in (so the hosts and IIS setup seem to be partially working), but entering a valid user name and password won't get me past the login pop-up. Cancelling the login pop-up gives me the IIS error:

> HTTP Error 401.2 - Unauthorized
> You are not authorized to view this page due to invalid authentication headers.

Additional info:

* I've tried "username" and ".\username".
* I'm logged in as an admin on the system and have additional users with passwords set up on the same VM.
* The application pool for site `SDL Tridion 2011` is `SDL Tridion 2011`, with the physical path set to `C:\Program Files (x86)\Tridion\web`. This application pool is set to .NET v4.0 with `Integrated` pipeline mode.
* When attempting to log in to one of these additional URLs, my already logged-in administrator (on `http://localhost`) will see authorization errors such as `/WebUI/Models/TCM54/Services/General.svc/GetItem failed to execute. STATUS (500): System.ServiceModel.ServiceActivationException`.

I'm thinking I'm missing something with the ports (currently set to 80), folder permissions, or an appropriate restart.

**How do I set up separate URLs for different users on a Tridion CMS instance/VM?**

[1]: http://tridion.stackexchange.com/a/1162/46
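As an aside, the hosts-file step in the question can be scripted so it is repeatable across demo VMs. A minimal sketch, assuming the two hostnames from the question and write access to the hosts file (pass a different path when experimenting):

```python
# Append loopback entries for the two CMS host headers to a hosts file,
# skipping any that are already present. HOSTS_PATH is the Windows default.
HOSTS_PATH = r"C:\Windows\System32\drivers\etc\hosts"

ENTRIES = {
    "authors.tridion.local": "127.0.0.1",
    "admin.tridion.local": "127.0.0.1",
}

def add_host_entries(path=HOSTS_PATH, entries=ENTRIES):
    # "a+" opens for reading and appending; writes always land at the end
    with open(path, "a+") as f:
        f.seek(0)
        existing = f.read()
        for host, ip in entries.items():
            if host not in existing:
                f.write(f"{ip} {host}\n")
```

The containment check makes the function idempotent, so re-running it on an already-configured machine adds no duplicate lines.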
context_90
I have recently built a rename-favorites feature for Tridion (http://stackoverflow.com/questions/14037820/tridion-favorites-ability-to-rename-favorite-links), and in the process I am intercepting the "AddUri" command and posing a prompt for the new name of the favorite. While testing the feature, we found that the AddUri command is sometimes not called for the "Send to My Favorites" click; at least our prompt wasn't showing up in some cases. For example: I went to the 06 level and added a page to favorites; my prompt showed up and I gave it a new name. Then I deleted it from the favorites. Then I added the page again to the favorites. This time my prompt for a new name doesn't show up. Is there a different command or cache that I need to look at?
context_91
I'm working on upgrading our Tridion 2011 SP1 installation to 2013 SP1. When I did my initial test of the upgrade, I did not install the Legacy Pack (as I was pretty certain we hadn't implemented any legacy templates). When I got to the stage of testing publishing, I was receiving "Success" in the Publishing Queue. However, when I navigated to a folder with my Dynamic Component Templates, I received the error regarding "Unable to get template type with legacy id." So, I then installed the Legacy Pack. However, after that I began to have publishing problems: the item seems to publish successfully (it is updated on the server), but the Publishing Queue is stuck on "Deploying". Checking the logs on both ends of the deployer doesn't show any problems. The publisher has entries like "DestinationController - All Deployer endpoints have completed, setting transaction to completed" with the correct transaction ID. Similarly, the HTTP Upload website's log has an entry "TransactionManager - Finished handling of Deployment package" along with clean-up references. **If I uninstall the Legacy Pack, publishing finishes properly with "Success." Reinstall it, and it gets stuck on "Deploying"...** My intention is to actually remove the dependencies on the Legacy Pack, but I was hoping someone could enlighten me as to why publishing gets hung up when the Legacy Pack is installed.
context_92
I installed Content Manager Server (SDL Web 8) with the Integrated Security database setting. How can I change the user that was provided in the installer?
context_93
I have Content Porter SP2 running on Tridion 2011 SP1. When I try to connect to my instance of Tridion to do an export, I get the message:

> Licensing error, your license has expired

I checked `Tridion\licenses\content porter\cp_license.xml`. It shows the following: [license] I see the creation date as 2013-8-8. Is there any reason why I should be getting an expired license so soon?
context_94
We are working on a web application which retrieves data using OData services, and hence we do not require any CD license on our content delivery web application. We are looking at the possibility of using the Contextual Image Delivery engine to provide multi-variant images for different devices, where different images can be retrieved based on the transformation URL. Do we require a CD license on our web application for using CID?
context_95
![enter image description here][1]

[1]: http://i.stack.imgur.com/PVfXv.png

I am getting the above error while starting SDL SmartTarget 2011 SP1. Please assist me.
context_96
I have two PDF files uploaded as **Multimedia Components** into the folder set up as the Images default path of the Publication. One of these PDF files can be opened directly in the browser in the standard way, e.g.:

> http://domain.com/en/images/file1.pdf

The PDF file opens up and shows its content. However, the second PDF file (or multimedia component), which was created in the same way and is located in the same folder, displays a **404 Error** when opened in the browser like this:

> http://domain.com/en/images/file2.pdf

"*The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable.*" I double- and triple-checked the settings and XMLs for each multimedia component, and they look the same to me. Why is this?
context_97
I have everything (CM, CD, Fredhopper) installed on a single system for a Java-based environment, and I have done all the configuration. While creating a new promotion through the Targeting tab, I get an error message: "An error occurred while processing this request. The appropriate post claim has not been set." The detailed error message is:

    at Tridion.Web.UI.Models.SmartTarget.Services.SavePromotion(String publicationTargetId, String promotionXml)
    at SyncInvokeSavePromotion(Object, Object[], Object[])
    at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
    at Tridion.Web.UI.Core.Extensibility.DataExtenderOperationInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
    at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
    at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
    at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage31(MessageRpc& rpc)
    at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)

Can anyone suggest the possible root cause of this error? Thanks in advance.
context_99
Fun, easy question for those on SDL Tridion 2013. In SDL Tridion 2011, there was an executable to rebuild the Solr search collection. How do we re-index search in [tag:2013] (no peeking at the tags)?